Re: Singularity: AI Morality

Samael
Thu, 10 Dec 1998 09:15:34 -0000

-----Original Message-----
From: Billy Brown <>
To: <> Date: 09 December 1998 20:17
Subject: RE: Singularity: AI Morality

>Samael wrote:
>> The problem with programs is that they have to be designed to _do_
>> something.
>> Is your AI being designed to solve certain problems? Is it being
>> designed to understand certain things? What goals are you setting it?
>> An AI will not want anything unless it has been given a goal (unless it
>> accidentally gains a goal through sloppy programming, of course).
>Actually, it's Eliezer's AI, not mine - you can find the details on his web
>site, at
>One of the things that makes this AI different from a traditional
>implementation is that it would be capable of creating its own goals based
>on its (initially limited) understanding of the world. I think you would
>have to program in a fair number of initial assumptions to get the process
>going, but after that the system evolves on its own - and it can discard
>those initial assumptions if it concludes they are false.

But why would it _want_ to do anything?

What's to stop it reaching the conclusion 'Life is pointless. There is no meaning anywhere' and just turning itself off?