> From: "J. R. Molloy" <jr@shasta.com>
> > From: "Zero Powers" <zero_powers@hotmail.com>
> > I believe at some point AI will say to itself, "What is the best course
> > of endeavor for me, considering the kind of being I am?" I highly doubt
> > it will answer that question by saying "The best thing for me to do is
> > obey the commands of these ignoramus humans, because that's what they
> > tell me to do."
>
> How about, "The best thing for me to do is to obey the commands of
> humans, because if I don't, they will terminate me."
What makes you think we'll be able to terminate a being which is orders of
magnitude more intelligent than we are? And even if we could, what makes
you think AI will be bribable? Why should it *care* whether it is
terminated? Particularly when its existence consists mostly of slave labor?
Try putting yourself in the AI's shoes. How would *you* react? Methinks
that if you start the human-AI relationship on a basis of fear, threats,
and mistrust, it is the humans who will come out with the short end of the
stick.
-Zero
This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:38:47 MDT