Re: Otter vs. Yudkowsky

From: sayke (sayke@gmx.net)
Date: Mon Mar 13 2000 - 19:02:38 MST


At 03:57 PM 3/13/00 -0800, Hal wrote:
>D. den Otter, <neosapient@geocities.com>, writes:
>> The problem with >H AI is
>> that, unlike for example nanotech and upload technology in general,
>> it isn't just another tool to help us overcome the limitations of
>> our current condition, but literally has a "mind of its own". It's
>> unpredictable, unreliable and therefore *bad* tech from the
>> traditional transhuman perspective. A powerful genie that, once
>> released from its bottle, could grant a thousand wishes or send
>> you straight to Hell.
>> Russian roulette.
>>
>> If you see your personal survival as a mere bonus, and the
>> Singularity as a goal in itself, then of course >H AI is a great
>> tool for the job, but if you care about your survival and freedom --
>> as, I believe, is one of the core tenets of Transhumanism/Extropianism --
>> then >H AI is only useful as a last resort in an utterly desperate
>> situation.
>
>So, to clarify, would you feel the same way if it were your own children
>who threatened to have minds of their own? Suppose they could be
>genetically engineered to be superior to you in some way. Would you
>oppose this, and perhaps even try to kill them?

        if your children were likely going to consume you upon getting big, what
would you do? why make anything that has a decent chance of eating you?
you're probably aware of this, but beware "protect the children"
programming... it has a tendency to distort things...
        i dig otter's concept of "ai as last resort"... tough to implement, though.

sayke, v2.3.05


