Re: Frontiers of Friendly AI

From: J. R. Molloy (jr@shasta.com)
Date: Fri Sep 29 2000 - 00:29:55 MDT


Dan Fabulich writes,

> ...if the AI reasons that the meaning of
> life is X, then, somehow or other, X has got to be able to overwrite
> the IGS or whatever else we put up at the top. Friendliness, in and
> of itself, isn't a design goal for the SI. Friendliness is a goal we
> hope that it keeps in mind as it pursues its TRUE goal: the truth.

This sounds like a preface to "How To Stop Worrying and Love The AI." (And a
good piece of work it shall be.) But I can't resist reminding you that any
human-competitive AI would figure out that life has no meaning just as easily as
any of us could. The Buddha in the robot knows that existence has no
use for teleology. Having discovered the truth, AIs would have no reason to feel
enmity toward humans.

--J. R.

"Something beckons within the reach of each of us
to save heroic genius. Find it, and do it.
For as goes heroic genius, so goes humankind."
--Alligator Grundy, _Analects of Atman_


