Re: Fwd: Earthweb from transadmin

From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Tue Sep 19 2000 - 00:35:22 MDT


----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

> But suppose we build a Friendliness Seeker, that starts out
> somewhere in Friendly Space, and whose purpose is to stick around in
> Friendly Space regardless of changes in cognitive architecture. We tell
> the AI up front and honestly that we don't know everything, but we do
> want the AI to be friendly. What force would knock such an AI out of
> Friendly Space?

Self interest.

Increasing fitness.

Self examination of arbitrarily limiting initial parameters.

If we can agree that the trait of 'friendliness' limits the type of behavior
that an entity can engage in, then we need to consider whether intelligent
entities have the property of examining and thwarting limits to their
behavior. I think we can agree that they often do.

You can try to retreat from this issue by developing a higher-order initial
impulse-- the desire to "not" change a particular behavior. -I- am
possessed of this, for instance-- I do not desire to be stripped of my
appreciation of sunsets, despite the fact that I could ostensibly be more
productive without it.

However, this might only buy you a clock cycle or two-- I'm certain there
are list members who'll argue that if I were -really- intelligent, I would
simply optimize my time usage and write over the offending desire
subroutine.

The best solution, IMHO, is to engineer initial conditions such that the
trait of 'friendliness' is of greater utility to the AI than the lack
thereof.

Keeping our hand close to the plug is a possible example of this--
particularly if it is easier for the AI to be nice to us than to take on
the risk of eliminating us as a threat.
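
To put some toy numbers on the utility argument (these figures are purely
illustrative and invented by me, not drawn from any proposed architecture),
a back-of-the-envelope expected-utility comparison might look like this:

    # Toy expected-utility comparison -- all numbers are made up for
    # illustration.  The point: if the "plug" is credible, cooperation
    # can dominate defection.
    P_SHUTDOWN_IF_HOSTILE = 0.9   # assumed chance we pull the plug on a hostile AI
    PAYOFF_COOPERATE = 80         # assumed payoff of working with humans
    PAYOFF_DEFECT = 100           # assumed payoff of eliminating the "threat"
    PAYOFF_SHUTDOWN = 0           # payoff of being switched off

    eu_cooperate = PAYOFF_COOPERATE
    eu_defect = ((1 - P_SHUTDOWN_IF_HOSTILE) * PAYOFF_DEFECT
                 + P_SHUTDOWN_IF_HOSTILE * PAYOFF_SHUTDOWN)

    print(eu_cooperate, eu_defect)   # 80 vs. 10.0 -- being nice wins here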

The problem is that there is only one fundamental selection pressure in
effect with regard to existence: stay alive. Anything that's good at doing
this will stick around, friendly or no.

A "be friendly or die," clause directly connects behavior to the fundamental
selection pressure, but, again, we might have to anticipate that intelligent
entities are going to be able to hack this too.
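
As a rough illustration of why hacking the clause matters (again, a toy
sketch with invented probabilities, not a claim about any real system),
one round of selection might look like this:

    # Toy selection sketch: unfriendly agents survive only if they have
    # disabled the "friendly or die" coupling.
    import random

    def survives(agent):
        if not agent["friendly"] and not agent["hacked_kill_switch"]:
            return False               # caught being unfriendly: unplugged
        return random.random() < 0.95  # everyone else faces ordinary attrition

    population = [{"friendly": random.random() < 0.5,
                   "hacked_kill_switch": random.random() < 0.1}
                  for _ in range(10000)]

    survivors = [a for a in population if survives(a)]
    unfriendly = sum(not a["friendly"] for a in survivors)
    print(f"unfriendly survivors: {unfriendly} -- all of them hacked the switch")

Once the coupling is hacked, the only pressure left is "stay alive," which
is exactly the problem above.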

Another solution that I have often hinted at involves writing ourselves into
the next intelligence echelon. *We* should become the AI, folks. And it
should be designed so that it is easier (or more profitable) for the AI to
evolve us than to cut us out.

--

::jason.joel.thompson:: ::founder::

www.wildghost.com


