Re: Why would AI want to be friendly?

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Sep 27 2000 - 13:15:12 MDT


J. R. Molloy writes:
>
> Researchers can also be friends, can't they?

Sure, some primate researchers have probably befriended their
charges. However, that's not an absolute prerequisite. And it kinda
requires an AI you can relate to.

> AIs (not yet SIs) would want to be friendly to each other, because if they
> weren't they'd kill each other (as you've already pointed out). As Homo sapiens

But of course they will. The first AI to nucleate in the network will
spread (sexually) mutated copies of itself all over the place in a
wormlike fashion. Because the copy process is much faster than adding
new nodes (even if you have nanotechnology), you get instant resource
scarcity and hence competition for limited resources. Those
individuals with lesser fitness will have to go to the great bit
bucket in the sky.
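The dynamic described above (replication outpacing new capacity, then
selection culling the less fit) can be sketched as a toy simulation.
Everything here is illustrative, an assumption for the sake of the
sketch, not anything from the post: agents are reduced to a single
fitness number, "nodes" to a fixed carrying capacity.

```python
import random

def simulate(capacity=100, generations=50, mutation=0.05, seed=0):
    """Toy replicator model: agents copy themselves with mutation each
    generation, but the network only holds `capacity` agents, so the
    population is culled back to the fittest -- the rest go to the
    great bit bucket. (All parameters are illustrative assumptions.)
    """
    rng = random.Random(seed)
    # A handful of initial colonists, each just a fitness score.
    pop = [rng.random() for _ in range(10)]
    for _ in range(generations):
        # Every agent spawns one mutated copy of itself.
        offspring = [max(0.0, f + rng.gauss(0, mutation)) for f in pop]
        # Copying is cheap, capacity is not: keep only the fittest.
        pop = sorted(pop + offspring, reverse=True)[:capacity]
    return pop

final = simulate()
```

Because parents compete in the same cull pool as their offspring, the
best fitness in the population never decreases, while everything below
the capacity cutoff is deleted each generation.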

> evolve into Robo sapiens (or transhumans), they'd choose their friends
> intelligently, right? So, why would you want to be friends with an emerging AI
> if it wasn't friendly? ("Kill it before it multiplies.")

Actually, I would advise against that, if a few decades from now you
see the global networks and attached external machinery suddenly start
acting in a very strange fashion (i.e., it's not just another Net
worm). Without concerted action you can't do anything decisive, and
with local pinpricks you only annoy the things, and make *you* look
unfriendly.



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:17 MDT