Re: Singularity: AI Morality

Samael
Tue, 15 Dec 1998 10:00:48 -0000

-----Original Message-----
From: Dan Clemmensen <> To: <> Date: 15 December 1998 01:46
Subject: Re: Singularity: AI Morality

>Samael wrote:
>> From: Dan Clemmensen <>
>> >Samael wrote:
>> >>
>> >> But why would it _want_ to do anything?
>> >>
>> >> What's to stop it reaching the conclusion 'Life is pointless. There's
>> >> no meaning anywhere' and just turning itself off?
>> >>
>> >Nothing stops any particular AI from deciding to do this. However, this
>> >doesn't stop the singularity unless it happens to every AI.
>> >The singularity only takes one AI that decides to extend itself rather
>> >than terminating.
>> >
>> >If you are counting on AI self-termination to stop the Singularity, you
>> >have to explain why it affects every single AI.
>> I don't expect it will, because I expect the AIs to be programmed with
>> strong goals that they will not think about.
>Same problem. This only works if all AIs are inhibited from extending their
>"strong goals". This is very hard to do using traditional computers. You
>will either permit the AI to program itself, or not. I feel that most
>researchers will be tempted to permit the AI to program itself. Only one
>researcher needs to do this to break your containment system. Do you feel
>a self-extending AI must intrinsically have strong and un-self-modifiable
>goals in order to exist, or do you feel that all AI researchers will
>correctly implement this feature, or do you have another reason?

  1. One must have a reason to do something before one does it.
  2. If one has an overarching goal, one would modify one's subgoals to reach the overarching goal but would not modify the overarching goal, because one would not have a reason to do so.

Why would an AI modify its overriding goals? What reason would it have? If it has been programmed with the motive 'Painting things red is good', why would it change that? If it did change that (or at least considered what it meant and why it wanted it), it might well come to the conclusion that 'painting things red is no better than increasing my own intelligence', but why would it want to increase its own intelligence? Why would it think intelligence was important to it? It's just another trait, only as important as it is programmed to think it is.
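The two-point argument above can be sketched in code. This is a purely hypothetical toy, not any real AI design: an agent whose overarching goal is fixed at construction and provides the only criterion by which subgoals are kept or dropped, so the agent has no standpoint from which to revise the top goal itself (all names here are illustrative):

```python
# Toy sketch (assumed, not from any real system): an agent with an
# immutable overarching goal and freely rewritable subgoals.

class GoalDirectedAgent:
    def __init__(self, overarching_goal):
        # Set once; the agent has no criterion outside this goal
        # by which it could ever judge or replace it.
        self._overarching_goal = overarching_goal
        self.subgoals = []

    @property
    def overarching_goal(self):
        # Read-only: no setter, so the top goal cannot be modified.
        return self._overarching_goal

    def revise_subgoals(self, candidates):
        # Subgoals are kept only insofar as they serve the top goal.
        self.subgoals = [g for g in candidates
                         if self._serves_top_goal(g)]

    def _serves_top_goal(self, subgoal):
        # Crude illustrative test: the subgoal mentions the top goal.
        return self._overarching_goal in subgoal


agent = GoalDirectedAgent("paint things red")
agent.revise_subgoals([
    "acquire red paint to paint things red",
    "increase my own intelligence",  # dropped: no link to the top goal
])
print(agent.subgoals)
```

The point of the sketch is only that subgoal revision happens *inside* the fixed goal, exactly as in point 2: "increase my own intelligence" is discarded because nothing in the agent's motivation connects it to painting things red.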