Re: Singularity: AI Morality

Samael (Samael@dial.pipex.com)
Mon, 14 Dec 1998 10:36:49 -0000

-----Original Message-----
From: Dan Clemmensen <Dan@Clemmensen.ShireNet.com>
To: extropians@extropy.com <extropians@extropy.com>
Date: 12 December 1998 01:04
Subject: Re: Singularity: AI Morality

>Samael wrote:
>>
>> But why would it _want_ to do anything?
>>
>> What's to stop it reaching the conclusion 'Life is pointless. There is no
>> meaning anywhere' and just turning itself off?
>>
>Nothing stops any particular AI from deciding to do this. However, this
>doesn't stop the singularity unless it happens to every AI.
>The singularity only takes one AI that decides to extend itself rather than
>terminating.
>
>If you are counting on AI self-termination to stop the Singularity, you'll
>have to explain why it affects every single AI.

I don't expect it will, because I expect the AIs to be programmed with strong goals that they will not think about.

Samael