Re: AI: This is how we do it

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Feb 17 2002 - 15:20:27 MST


John Clark wrote:
>
> Any AI worthy of the name must have the ability to grow and improve itself,
> and that means making decisions about its hardware and software. If it has
> the power to improve itself, it also has the power to destroy itself; thus a
> successful AI must be hardwired at the very lowest level to feel that one
> outcome is good and the other is bad, rather like human opinions of
> good and evil or pleasure and pain. Like us, an AI would find matters that
> directly influence survival not boring at all.

You know, I hadn't intended to ask, but I just don't get it. Why are there no
traces at all of Friendly AI concepts in this discussion? I don't just mean
the answers; I mean the underlying systemic concepts used to untangle the
questions. Nobody is distinguishing between supergoals and subgoals. Nobody
is distinguishing between the goal system, the goal system's external
referents, and the AI's reflective design model of its own goal system.
People are asking the same questions they were asking in 1999. I could
understand if there had been progress, just not progress toward Friendly AI,
but this is simple stasis. Why?

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


