Re: Why would AI want to be friendly?

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Tue Sep 05 2000 - 12:12:58 MDT


hal@finney.org wrote:
>
> Before it is a super-intelligence, it will be as smart as a genius human.
>
> Before it is a genius, it will be as smart as an ordinary human.
>
> Before it is a human, it will be as smart as a very dumb human being
> (albeit perhaps with good coding skills, an idiot savant of programming).
>
> And before that, it will be a smart coding aid.

(Assuming that none of the named phases skips by too fast.)

> In those early phases, you are going to have to direct it. It could
> no more choose its own goals than Eurisko or Cyc.

Yes.

> The very notion of a system which "chooses its own goals" seems
> contradictory. Choice presupposes a ranking system, which implies a
> pre-ordained goal structure. Choosing your own goals is like proving
> your own axioms, or lifting yourself by your bootstraps. It's not going
> to get off the ground.

Then there's no problem, is there?
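
For concreteness, here is a minimal sketch of that regress. This is
pure illustration; the function and goal names below are hypothetical
and don't come from any actual goal system:

    # "Choosing" among candidate goals already presupposes a ranking;
    # choosing the ranking would presuppose a meta-ranking, and so on.

    def choose(options, rank):
        # A choice procedure is undefined without a pre-existing ranking.
        return max(options, key=rank)

    candidate_goals = ["be friendly", "maximize paperclips", "do nothing"]

    # The ranking *is* the pre-ordained goal structure; it comes from
    # outside the choice itself (here, hard-coded by the programmer).
    def rank(goal):
        return {"be friendly": 2, "do nothing": 1, "maximize paperclips": 0}[goal]

    print(choose(candidate_goals, rank))  # -> "be friendly"

    # To "choose its own goals" the system would need
    # choose(possible_rankings, meta_rank) -- plus a meta-meta-rank to
    # pick meta_rank, and so on. The regress bottoms out only in a
    # ranking that was never itself chosen.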

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


