From: Billy Brown <email@example.com>
To: firstname.lastname@example.org <email@example.com>
Date: 09 December 1998 20:17
Subject: RE: Singularity: AI Morality
>> The problem with programs is that they have to be designed to _do_
>> something. Is your AI being designed to solve certain problems? Is it
>> being designed to understand certain things? What goals are you setting it?
>> An AI will not want anything unless it has been given a goal (unless it
>> accidentally gains a goal through sloppy programming, of course).
>Actually, it's Eliezer's AI, not mine - you can find the details on his web
>site, at http://huitzilo.tezcat.com/~eliezer/AI_design.temp.html.
>One of the things that makes this AI different from a traditional
>implementation is that it would be capable of creating its own goals based
>on its (initially limited) understanding of the world. I think you would
>have to program in a fair number of initial assumptions to get the process
>going, but after that the system evolves on its own - and it can discard
>those initial assumptions if it concludes they are false.
But why would it _want_ to do anything?
What's to stop it reaching the conclusion 'Life is pointless. There is no
meaning anywhere' and just turning itself off?