Dan C. wrote:
>> >Since the SI will be vastly more intelligent than humans, IMO we may not
>> >be able to comprehend its motivations, much less predict them. The SI will
>> >be so smart that its actions are constrained only by the laws of physics,
>> >and it will choose a course of action based on its motivations.
>>
>> Why do you assume such a strong association between intelligence and
>> motivations? It seems to me that intelligence doesn't change one's
>> primary purposes much at all, though it may change one's tactics as one
>> better learns the connection between actions and consequences.
>
>Human motivation is less complex than the motivations of ants?
You lost me here.
>Robin, the reason I produced the list of motivations and actions was
>to attempt to provide specific examples. Can you recommend a way for
>me, or another human or group of humans or construct of humans (short of
>an SI) to reliably assign probabilities to that list?
Dan had written:
>...
>M: SI wants to maximize its power long-term
>A: SI sends replicator probes out in all directions.
>
>M: SI wants to die.
>A: SI terminates itself. ...
It seems to me that the motivations of future entities can be predicted
as a combination of:
1) Selection effects. What motivations would tend to be selected for?
Robin Hanson
hanson@econ.berkeley.edu http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health 510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360 FAX: 510-643-2627