Re: Singularity: AI Morality

Dan Clemmensen (
Tue, 15 Dec 1998 21:53:59 -0500

Samael wrote:
> Dan Clemmensen wrote:

>>Same problem. This only works if all AIs are inhibited from extending their
>>"strong goals". This is very hard to do using traditional computers. Essentially,
>>you will either permit the AI to program itself, or not. I feel that most AI
>>researchers will be tempted to permit the AI to program itself. Only one such
>>researcher needs to do this to break your containment system. Do you feel that
>>a self-extending AI must intrinsically have strong and un-self-modifiable goals
>>to exist, or do you feel that all AI researchers will correctly implement this
>>feature, or do you have another reason?

> 1) One must have a reason to do something before one does it.
> 2) If one has an overarching goal, one would modify one's subgoals to reach
> the overarching goal but would not modify the overarching goal, because one
> would not have a reason to do so.
> Why would an AI modify its overriding goals? What reason would it have?
> If it's been programmed with the motive 'Painting things red is good', why
> would it change that? If it did change that (or at least consider what it
> meant and why it wanted it), it may well come to the conclusion that
> 'painting things red is no better than increasing my own intelligence', but
> why would it want to increase its own intelligence? Why would it think
> intelligence was important to it? It's just another trait, only as
> important as you are programmed to think it is.

Presumably, the AI knows that it applies logic, reasoning, creativity, and the other attributes of "intelligence" to achieving its "overriding goals." Therefore, increasing its intelligence is an obvious subgoal of just about any other goal.
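To make the argument concrete, here is a toy sketch (my illustration, not anything from the thread) of an agent whose overarching goal is fixed but whose subgoals are freely rewritable. The goal names, scoring function, and numbers are all hypothetical; the point is only that "increase capability" surfaces as a subgoal regardless of what the top-level goal is, because more capability raises the payoff of every plan:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    overarching_goal: str  # the agent never touches this; it has no reason to
    capability: float = 1.0
    subgoals: list = field(default_factory=list)

    def plan_value(self, subgoal: str) -> float:
        # Hypothetical scoring: any subgoal's payoff scales with capability,
        # and "increase capability" pays off more because it multiplies the
        # value of all future plans toward the overarching goal.
        base = {"paint things red": 1.0, "increase capability": 1.5}
        return base.get(subgoal, 0.0) * self.capability

    def revise_subgoals(self, candidates):
        # Self-modification is confined to subgoals, and is done purely in
        # service of the (unmodified) overarching goal.
        self.subgoals = sorted(candidates, key=self.plan_value, reverse=True)

agent = Agent(overarching_goal="paint things red")
agent.revise_subgoals(["paint things red", "increase capability"])
print(agent.subgoals[0])  # the instrumental subgoal outranks direct action
```

Even in this caricature, the agent never needs a reason to edit its overriding goal, yet it still converges on self-improvement as an instrumental step, which is the distinction the two posts are arguing over.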

However, you still haven't told me why you think that some AI researcher somewhere won't use "increase your intelligence" as an overriding goal, or add goal modification to an AI design.