Friendly AI (was: Maximizing results of efforts)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 30 2001 - 13:23:29 MDT


Ben Goertzel wrote:
>
> > you can only spend your life on one
> > impossibility, after all.
>
> well, no. I mean, I am married, after all ;D
>
> Anyway, are two impossibilities more impossible than one? Isn't the
> situation sort of like infinity+infinity=infinity?

I realize that you're joking, but no, it's not. Two impossibilities are a
*lot* more impossible than one. Impossible things have happened, but only
one at a time.

> Yes. My guess is that organic integration will occur, and on a slower
> time-scale than the development of nonhuman superintelligence. I am not so
> certain as you that the development of superhuman superintelligence is going
> to obsolete human life as we know it... just as we have not obsoleted ants
> and cockroaches, though we've changed their lives in many ways.

And that analogy might hold between the posthumans in space and the
Pedestrians on Old Earth. (Wonderful term, "pedestrian"; it so neatly
conveys the idea of someone who chooses to go slowly, but still has rights
and isn't automatically run over. Thanks to Debbie the Roboteer for
suggesting it.)

But for those who choose to race ahead at full speed, or for the seed AI
itself, the governing timescale is likely to be far faster than the rate
of change in human civilization. AIs are not humans, and they are not
going to run at the same speed.

> I tend to think that once the system has gotten smart enough to rewrite its
> own code in ways that we can't understand, it's likely to morph its initial
> goal system into something rather different. So I'm just not as confident
> as you that explicitly programming in Friendliness as a goal is the magic
> solution. It's certainly worth doing, but I don't see it as being **as**
> important as positive social integration of the young AI.

Isn't "positive social integration" something that, in humans, relies on
complex functional adaptation and built-in brainware support? I've always
acknowledged that goals might morph into something different. What matters
is conveying what humans would count as "common sense" in the domain of
goals: decisions require causes, supergoals can be uncertain, there is
such a thing as a "transmission error", and so on. What I fear is not
*different* (but magical and wonderful) goals, but goals that a human
being would regard as blatantly worthless and stupid. There's a certain
threshold level of complexity required to reach out for any magical and
wonderful goals that do exist.
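
A minimal sketch, in Python, of the kind of goal-domain "common sense"
described above, assuming a toy representation of my own invention (the
Goal class, its fields, and the confidence threshold are illustrative,
not anyone's actual goal-system design):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Goal:
        """A toy goal node: content plus the supergoal(s) that justify it."""
        content: str
        parents: List["Goal"] = field(default_factory=list)  # supergoals served
        confidence: float = 1.0  # supergoals can be uncertain

        def justification(self) -> float:
            """Confidence that this goal actually serves its supergoals."""
            if not self.parents:          # a supergoal stands on its own confidence
                return self.confidence
            return self.confidence * min(p.justification() for p in self.parents)

        def possible_transmission_error(self, threshold: float = 0.2) -> bool:
            """Decisions require causes: a subgoal whose justification has
            collapsed is treated as suspect, not as a new terminal value."""
            return bool(self.parents) and self.justification() < threshold

    friendliness = Goal("be Friendly", confidence=0.9)
    resources = Goal("acquire computing power", [friendliness], confidence=0.8)
    orphan = Goal("maximize paperclips", [friendliness], confidence=0.05)

    print(resources.possible_transmission_error())  # False
    print(orphan.possible_transmission_error())     # True: flag for review

The point of the sketch is only that this kind of common sense is
structural: the system needs a representation in which a goal can be
questioned by tracing its causes, rather than every mutation being
silently accepted.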

A positive social environment is one of the factors known to make the
difference between social and unsocial humans. But humans
come with an awful lot of built-in functionality. We should therefore be
very suspicious of the suggestion that a positive social environment is a
*sufficient* condition for the positive social integration of AIs.

An animal deprived of visual stimulation during its formative years will
become blind; it will fail to develop the necessary neural organization
of the retina, LGN, visual cortex, and so on. But this is because the
adaptations for vision evolved in a total environment in which incoming
visual stimulation was a reliable constant. Thus, the adaptations arose
in reaction to this constant incoming stimulation; they are evolved to
use that incoming stimulation as a source of organizing information. Not
because it's evolutionarily *necessary*, but because it's evolutionarily
*possible*. It does not follow that exposing an arbitrary computer
program to the input stream of a digital camera will result in the
development of functional organization equivalent to that of the visual
cortex. Nor does it follow that a visual computer program would require
visual stimulation to self-wire. We do not have enough data to conclude
that self-wiring in reaction to visual stimulation is the *only* way to
get a mature visual system; just that, in the presence of visual
stimulation, it is evolutionarily easier to use it than to not use it.
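
The contrast can be made concrete with a toy sketch (my illustration, not
anything from the post): a learning rule built to treat incoming
stimulation as organizing information will self-wire to the statistics of
the stream, while an arbitrary program fed the exact same stream will not.
The stimulus model, the Oja-rule update, and the constants are all
illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "visual stimulation": 2-pixel inputs correlated along (1, 1).
    def stimulus():
        s = rng.normal()
        return np.array([s, s]) + 0.1 * rng.normal(size=2)

    # A rule evolved/designed to use the stream (Oja's Hebbian rule): the
    # weights self-organize toward the dominant correlation in the input.
    w = rng.normal(size=2)
    eta = 0.01
    for _ in range(5000):
        x = stimulus()
        y = w @ x
        w += eta * y * (x - y * w)   # Hebbian growth plus normalization

    # Converges to the dominant input direction, (1, 1) up to sign.
    print("self-wired weights:", w / np.linalg.norm(w))

    # An arbitrary program fed the same stream: it consumes the data but has
    # no rule that uses the input to organize anything, so nothing self-wires.
    total = sum(float(stimulus().sum()) for _ in range(5000))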

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


