Re: Singularity: AI Morality

Dan Clemmensen (Dan@Clemmensen.ShireNet.com)
Wed, 09 Dec 1998 19:13:13 -0500

Samael wrote:
>
>
> The problem with programs is that they have to be designed to _do_
> something.
>
> Is your AI being designed to solve certain problems? Is it being designed
> to understand certain things? What goals are you setting it?
>
> An AI will not want anything unless it has been given a goal (unless it
> accidentally gains a goal through sloppy programming of course).
>
If computer-based intelligence of any type is possible, then it's very likely that different researchers will choose different goals. IMO, at least one researcher will use the goal or directive "enhance your intelligence." This seems very likely, since that is, after all, the goal the researcher was pursuing in the first place. Unfortunately, that's all it takes to initiate the singularity, given the availability of a large base of computers. Note that this reasoning is not particularly dependent on the nature of the AI's programming, but only on its ability to increase its effective intelligence by using more computing power.