Re: Why would AI want to be friendly?

From: David Lubkin (lubkin@unreasonable.com)
Date: Sun Sep 24 2000 - 17:12:43 MDT


On 9/24/00, at 12:11 AM, Zero Powers wrote:

>Based upon your belief, I presume, that AI will be so completely unlike
>humanity that there is no way we can even begin to imagine the AI's
>thoughts, values and motivations?

This reminds me of a pretty cool book I think I may have recommended before,
some time in the Pleistocene era. _Superior Beings: If They Exist, How
Would We Know?: Game-Theoretic Implications of Omniscience, Omnipotence,
Immortality, and Incomprehensibility_, by Steven J. Brams (Springer-Verlag,
1983).

Brams uses game theory to analyze possible interactions between a human
and a God or SI, working out the best strategies for each player.
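For a flavor of the approach (a toy sketch only -- these are not Brams's
actual games or payoffs), here's how you'd check a 2x2 ordinal game
between a Superior Being and a Person for stable outcomes:

  # Hypothetical 2x2 ordinal game; 4 = best outcome ... 1 = worst.
  # Rows are the Superior Being's strategies, columns the Person's.
  # The payoff numbers below are made up purely for illustration.
  from itertools import product

  SB_STRATEGIES = ["reveal", "conceal"]
  P_STRATEGIES = ["believe", "doubt"]
  payoffs = [
      [(4, 3), (1, 2)],   # SB reveals
      [(2, 4), (3, 1)],   # SB conceals
  ]

  def nash_equilibria(payoffs):
      """Return cells where neither player gains by deviating alone."""
      eq = []
      for r, c in product(range(2), range(2)):
          sb_here, p_here = payoffs[r][c]
          sb_best = all(sb_here >= payoffs[o][c][0] for o in range(2))
          p_best = all(p_here >= payoffs[r][o][1] for o in range(2))
          if sb_best and p_best:
              eq.append((SB_STRATEGIES[r], P_STRATEGIES[c]))
      return eq

  print(nash_equilibria(payoffs))   # -> [('reveal', 'believe')]

Brams then asks what changes when one player is omniscient or immortal,
which reshapes which outcomes are stable.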

It's out of print. www.bibliofind.com reports only two copies for sale.
Better hurry; there's not much flint in the valley....

-- David Lubkin.

______________________________________________________________________________

lubkin@unreasonable.com || Unreasonable Software, Inc. || www.unreasonable.com
a trademark of USI:

> > > > > B e u n r e a s o n a b l e .
______________________________________________________________________________


