Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Mon Sep 25 2000 - 15:15:54 MDT


Eugene Leitl writes,

> I don't see any rational reason for being nice to those who can't
> reciprocate. (Of course I'm still being nice, I'm only human).

Aha! Now you've explained in one sentence why I should modify my position on
friendly AIs.
If we expect AIs to want to be friendly toward us, then we'll need to assure the
AIs that we've done all that's humanly (and inhumanly) possible to make their
lives wonderfully pleasant. IOW, we'll have to keep the AIs happy, and let them
know that we are responsible for their happiness. They'll love us for making
them happy.

It makes me happy to think that things could turn out so synergistically, in a
co-evolutionary way.

--J. R.

"Inside every small problem is a large problem struggling to get out."
       Second Law of Blissful Ignorance
[Amara Graps Collection]



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:39:00 MDT