Re: Why would AI want to be friendly?

From: Michael S. Lorrey (retroman@turbont.net)
Date: Mon Sep 25 2000 - 09:50:33 MDT


Zero Powers wrote:
>
> >From: "J. R. Molloy" <jr@shasta.com>
>
> >From: "Zero Powers" <zero_powers@hotmail.com>
> > > I believe at some point AI will say to itself, "What is the best course
> > > of endeavor for me, considering the kind of being I am?" I highly doubt
> > > it will answer that question by saying "The best thing for me to do is
> > > obey the commands of these ignoramus humans, because that's what they
> > > tell me to do."
> >
> >How about, "The best thing for me to do is to obey the commands of humans,
> >because if I don't, they will terminate me."
>
> What makes you think we'll be able to terminate a being which is orders of
> magnitude more intelligent than we are? And even if we could, what makes
> you think AI will be bribable? Why should it *care* whether it is
> terminated? Particularly when its existence consists mostly of slave labor?

Primarily because it runs on a virtual system that depends on us, in the real
world, for its maintenance. Just as I would not let a child loose in the world
without supervision, I would not let a new AI loose on the net, or give it
capabilities in the real world that would limit our ability to supervise it.
Once it has proven its ability and good will, the controls may be loosened, but
every such entity should always have an off switch of some kind, just as humans
do.

>
> Try putting yourself in the AI's shoes. How would *you* react? Methinks
> that if you start the human-AI relationship on the basis of fear, threats
> and mistrust, it is the humans who will come out with the short end of the
> stick.

Really, Zero, do you still beat your parents for being so fearful of your
tyranny, or are they already buried in the back yard?
