Re: Thinking about the future...

Robin Hanson
Mon, 26 Aug 96 12:29:42 PDT

Max More writes:
>> I think it would be unlikely that we create successors
>>that out-compete us, most likely they will inhabit a somewhat different
>>ecological/memetic niche that will overlap with ours; competition a
>.... However, there may be a period during which we're very
>much in the same space. That's the period in which humans could be at risk
>if AI/SIs have no regard for our interests. What I'm thinking is that it's
>possible, even likely, that SI will be developed before really excellent
>robotics. AI's in that case would not be roaming around much physically, but
>they could exist in distributed form in the same computer networks that we
>use for all kinds of functions crucial to us.
>If they need us for doing things physically, we would still have a strong
>position. Nevertheless, powerful SI's in the computer networks, could exert
>massive extortionary power, if they were so inclined. So I still think it
>important that SI researchers pay attention to issues of what values and
>motivations are built into SIs.

I think you attribute too much to values and too little to social
institutions. The reason the U.S. is not usually at war with Mexico
has little to do with how much we each personally value the lives of
Mexicans. Similarly, the reason we tall folks have not slaughtered all
you short folks has little to do with how much we like you.

Rather, these phenomena are mostly due to social institutions and
dependencies - the fact that nations deter via military strength, that
laws deter crime within a nation, and that we all need each other in
various ways.

I think we have little to fear from AIs who are not enslaved, and who
are integrated into the rest of our social institutions, i.e., owning
property, being allied with different nations and corporations, etc.

You of course do have something to fear from AIs who can do your job
better than you, and are willing to do it for less.

Robin Hanson