Samael wrote:
> Dan Clemmensen wrote:
>> Same problem. This only works if all AIs are inhibited from extending their
>> "strong goals". This is very hard to do using traditional computers. Essentially,
>> you will either permit the AI to program itself, or not. I feel that most AI
>> researchers will be tempted to permit the AI to program itself. Only one such
>> researcher needs to do this to break your containment system. Do you feel that
>> a self-extending AI must intrinsically have strong and un-self-modifiable goals
>> to exist, or do you feel that all AI researchers will correctly implement this
>> feature, or do you have another reason?
Presumably, the AI knows that it applies logic, reasoning, creativity, and the other attributes of "intelligence" to achieving its "overriding goals." Therefore, increasing its intelligence is an obvious subgoal of just about any other goal.
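To make that concrete, here is a minimal toy sketch (mine, not from the original post) of the argument. It assumes a planner whose chance of achieving any terminal goal rises with a scalar "intelligence" parameter; all goal names and numbers are purely illustrative.

# Toy illustration: under the assumption that more intelligence raises the
# odds of achieving ANY goal, "increase own intelligence" wins as a subgoal
# regardless of which terminal goal the agent holds.

TERMINAL_GOALS = ["prove theorems", "cure disease", "manage a factory"]

def expected_success(intelligence: float) -> float:
    """Assumed model: success probability grows with intelligence, capped at 1."""
    return min(1.0, 0.1 * intelligence)

def best_subgoal(intelligence: float) -> str:
    candidates = {
        # Pursue the terminal goal directly at the current intelligence level.
        "work on goal directly": expected_success(intelligence),
        # Self-improve first, then pursue the goal at a higher level.
        "increase own intelligence": expected_success(intelligence + 1.0),
    }
    return max(candidates, key=candidates.get)

for goal in TERMINAL_GOALS:
    print(goal, "->", best_subgoal(intelligence=2.0))
# Every terminal goal yields the same instrumental subgoal:
# "increase own intelligence".

Of course this only holds under the stated assumption; the point of the sketch is that nothing about the terminal goal itself has to mention self-improvement for the subgoal to emerge.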
However, you still haven't told me why you think that some AI researcher somewhere won't use "increase your intelligence" as an overriding goal, or add goal modification to an AI design.