You make good points, Anders, about humans and nanite-AIs possibly occupying
different niches. However, there may be a period during which we're very
much in the same space. That's the period in which humans could be at risk
if AI/SIs have no regard for our interests. What I'm thinking is that it's
possible, even likely, that SI will be developed before really excellent
robotics. AIs in that case would not be roaming around much physically, but
they could exist in distributed form in the same computer networks that we
use for all kinds of functions crucial to us.
If they need us to do things physically, we would still have a strong
position. Nevertheless, powerful SIs in the computer networks could exert
massive extortionary power if they were so inclined. So I still think it
important that SI researchers pay attention to issues of what values and
motivations are built into SIs.
Upward and Outward!
Max
Max More, Ph.D.
maxmore@primenet.com
http://www.primenet.com/~maxmore
President: Extropy Institute (ExI)
Editor: Extropy
310-398-0375
http://www.primenet.com/~maxmore/extropy.htm