UPL: Dangers?

Twink (neptune@mars.superlink.net)
Sun, 23 Nov 1997 17:04:24 -0500 (EST)


At 01:06 PM 11/23/97 -0500, Keith Elis <hagbard@ix.netcom.com> wrote:
>> Because we are dealing with a multicellular animal. Much easier to
>> control than current microbes. Also, the octopus is a marine
>> animal. Despite its escape artist skills, it will be much easier to
>> control such an organism, which is less likely to, say, escape to
>> the oceans from my apartment than, say, an uplifted rabbit or dog.
>
>With uplifting, we face some of the same problems we face in dealing with
>AI's. One of these similarities is our inability to actually know *WHEN* the
>uplift has reached the level of sentience. (I.e., when do we *STOP* the
>uplift?) Given that the uplift is likely to be a gradual process, we will
>certainly have to find a method of estimating intelligence based on something
>other than observed behavior or observed problem-solving ability. I suppose a
>marked increase in brain activity/waves may be a clue, but this seems to be a
>relatively inelegant, and in the end, inconclusive means of estimating our
>success or failure.

We would have to come up with some sort of independent way to measure
sentience -- one which I cannot fathom right now. There do seem to be
correlations in humans between the speed with which neurons fire/react
and intelligence. However, this correlation is not that clearcut. Perhaps
Sandberg or others could add to this.

>One concern that would stem from this is that the
>newly-created intelligence may not *want* us to know it is intelligent. In such
>a case, it is possible that we would have discovered the means to uplift
>ourselves, but instead of doing so, we continue iterating our uplift procedures
>on the test subjects, and before we realize it, we have uplifted them to be
>smarter than we are.

The same problem would happen for augmenting humans. If you posit anti-
human motives for the upliftee/augmentee (uplifting is augmenting a non-
human, in a way), then, of course, this sort of thing could happen. Its
likelihood is another matter.

I kind of doubt that in the first experiments octopodes are going to come out
of the tank with genius-level IQs and a bad attitude, deceiving us into thinking
they are relatively stupid and benign. More likely, we will be lucky to wind up
with something as smart as a chimp on the first tries. Along the way, we will
learn a lot about how brains and intelligence work.

>Maybe this is not probable, but I think finding some way to answer the question
>of when we stop the uplift is necessary before we can even begin.

When the upliftees achieve sentience. After that, they will have to decide
whether they want to go further.

Apply this to humans. If you fear uplifted octopodes (or chimps, etc.), what
about posthumans? Should we allow posthumans, if such are produced,
to become only so smart, etc.?

Daniel Ust