Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Sep 06 2000 - 16:24:03 MDT


Jason Joel Thompson wrote:
>
> Isn't an essential component of superior intelligence the ability to detect
> and route around factors that limit its efficacy?

You're confusing supergoals and subgoals. Past a certain level of
intelligence, you can specify supergoals but not subgoals - you can tell the
AI what to do, but not how to do it. You can specify the AI's end goals, but
within those goals it's a free entity that does whatever it thinks will
accomplish them.
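
(A minimal sketch, in Python with invented names, of the division of labor
being described here: the supergoals are handed to the system from outside
and held fixed; the subgoals are whatever the system itself generates as
means to those ends. This is only an illustration of the distinction, not a
proposed design.)

    # Hypothetical sketch: the designer fixes supergoals; the system
    # derives its own subgoals as means to those ends.
    class ToyWorldModel:
        # Toy decomposition table standing in for the AI's own reasoning.
        MEANS = {
            "cure_disease_X": ["study_pathogen", "design_therapy", "run_trials"],
        }
        def decompose(self, goal):
            return self.MEANS.get(goal, [])

    class GoalSystem:
        def __init__(self, supergoals):
            # Supergoals: end states specified from outside. Held fixed.
            self.supergoals = tuple(supergoals)

        def plan(self, world_model):
            # Subgoals: chosen by the system itself, not by the designer;
            # they have value only insofar as they serve the supergoals.
            subgoals = []
            for goal in self.supergoals:
                subgoals.extend(world_model.decompose(goal))
            return subgoals

    ai = GoalSystem(["cure_disease_X"])    # what to do: specified
    print(ai.plan(ToyWorldModel()))        # how to do it: the AI's own choice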

Incidentally, I note that nobody else in this interesting-if-pointless
discussion seems to be keeping track of the distinction between supergoals
and subgoals. As flaws in discussions of SIs go, that one is pretty common,
and it's more than enough to render every bit of the reasoning useless.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
