Re: Why would AI want to be friendly?

From: Ken Clements (Ken@Innovation-On-Demand.com)
Date: Mon Sep 11 2000 - 03:32:31 MDT


You can use the term "friendly" in connection with the behavior of
humans and dogs and many other creatures that we understand, but it has
no meaning when applied to behavior that we have no hope of
understanding. I am always amused when SF writers attempt to describe
the motives and behaviors of the SI. As if!!

An SI may decide that it is "friendly" to suddenly halt all humans in
mid thought. No humans would see this as "bad" because no one would
experience it at all (I think I hear a tree falling in the woods
somewhere). Now you might say that it was "bad" anyway, because the SI
would have known that if we knew what it was going to do, we would not
have liked it. But what if the SI actually halted all of us because it
decided to make a very "friendly" world for us, but knew that the
planned manipulation of the matter of the galaxy would take several
billion years, and wanted to spare us the subjective wait for paradise
by encoding us now for playback later? What if we cannot make an SI in
the first place, because at some point in development they always go
into some kind of introspective state and halt themselves? These "what
ifs" are nonfalsifiable, and pointless.

We cannot know what an SI will do; if we could, it would not be one. It
all comes down to this basic childhood wisdom:

"It takes one to know one."

-Ken



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:37:37 MDT