Re: Why would AI want to be friendly?

From: Jason Joel Thompson (jasonjthompson@home.com)
Date: Wed Sep 06 2000 - 18:26:45 MDT


----- Original Message -----
From: "Eliezer S. Yudkowsky" <sentience@pobox.com>

> Jason Joel Thompson wrote:
> >
> > Isn't an essential component of superior intelligence the ability to
> > detect and route around factors that limit its efficacy?
>
> You're confusing supergoals and subgoals. Past a certain level of
> intelligence, you can specify supergoals but not subgoals - you can tell
> the AI what to do, but not how to do it. You can specify the AI's end
> goals, but within those goals it's a free entity and it does whatever it
> thinks will accomplish those goals.
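(For concreteness, here is a minimal sketch of the supergoal/subgoal split as
I read it. The names and structure are purely my own illustration, not
anyone's actual architecture: the supergoal is fixed from outside, while the
subgoals are whatever the agent itself generates to serve it.)

    # Hypothetical illustration only -- all names here are assumptions.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class GoalSystem:
        # The supergoal is specified by the designer and not rewritten here.
        supergoal: str
        # Subgoals are whatever the agent decides will serve the supergoal.
        subgoals: List[str] = field(default_factory=list)

        def plan(self, propose: Callable[[str], List[str]]) -> None:
            # Designer supplies the end ("what"); agent fills in means ("how").
            self.subgoals = propose(self.supergoal)

    system = GoalSystem(supergoal="be friendly to humans")
    system.plan(lambda goal: [f"model human preferences relevant to: {goal}",
                              f"avoid actions that conflict with: {goal}"])
    print(system.subgoals)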

I wasn't suggesting the two were equivalent-- you're reading my next
argument before I've made it.

Actually, I haven't made any arguments, yet. I've asked a leading question.

Now, perhaps another:

Isn't an essential component of superior intelligence the ability to detect
and alter its 'supergoals'?

--

::jason.joel.thompson:: ::founder::

www.wildghost.com


