Re: Why would AI want to be friendly?

From: J. R. Molloy (jr@shasta.com)
Date: Sun Sep 24 2000 - 15:00:09 MDT


Eliezer S. Yudkowsky writes,

> If, as seems to be the default scenario, all supergoals are ultimately
> arbitrary, then the superintelligence should do what we ask it to, for
> lack of anything better to do.

That sounds like you're putting yourself in the AI's shoes.

--J. R.

"You can't put yourself in the AI's shoes!"
--Eliezer S. Yudkowsky <sentience@pobox.com>
Sunday, September 24, 2000 12:29 AM


