Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Sat Sep 23 2000 - 16:23:06 MDT


From: "Zero Powers" <zero_powers@hotmail.com>
> I believe at some point AI will say to itself, "What is the best course of
> action for me, considering the kind of being I am?" I highly doubt it
> will answer that question by saying "The best thing for me to do is obey the
> commands of these ignoramus humans, because that's what they tell me to do."

How about, "The best thing for me to do is to obey the commands of humans,
because if I don't, they will terminate me."

> When you say "the initial suggestions will get junked anyway" if there is
> an objective morality, you are pretty much conceding that ultimately
> there'll be little assurance of human-friendly AI, right?

No, because in that case, objective morality will assure human-friendly AI.

> >Remember, an SI isn't going to be tormented by the pointlessness of it all
> >because it doesn't have the be-tormented-by-the-pointlessness-of-it-all
> >hardware.
>
> What makes you think our torment at pointlessness resides in the hardware?
> Seems pretty obvious to me it's in the *software*. In terms of hardware,
> ours isn't much different from a monkey's. Somehow I just don't see monkeys
> as suffering torment over pointlessness.

Points are pointless.

--J. R.
