Re: Maximizing results of efforts Re: Mainstreaming

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Mon Apr 30 2001 - 02:00:11 MDT


Ben Goertzel wrote:
>
> I like your categorization of plans:
>
> > Plans can be divided into three types. There are plans like Bill Joy's,
> > that work only if everyone on the planet signs on, and which get hosed if
> > even 1% disagree. Such plans are unworkable. There are plans like the
> > thirteen colonies' War for Independence, which work only if a *lot* of
> > people - i.e., 30% or 70% or whatever - sign on. Such plans require
> > tremendous effort, and pre-existing momentum, to build up to the requisite
> > number of people.
> >
> > And there are plans like building a seed AI, which require only a finite
> > number of people to sign on, but which benefit the whole world. The third
> > class of plan requires only that a majority *not* get ticked off enough to
> > shut you down, which is a more achievable goal than proselytizing a
> > majority of the entire planet.
>
> For the seed AI to be useful to humans rather than indifferent or hostile to
> them, what we need in my view is NOT an artificially-rigged Friendliness
> goal system,

I protest thy slander. Friendliness is about duplicating really deep
*structural* cognitive properties that are present in human minds but
which are not automatically present in AIs. The actual content is nearly
icing on the cake by comparison. Friendly AI is a self-sustaining funnel
through which we can pour certain types of human complexity into the AI,
such that the pouring is seen by the AI as desirable at any given point in
time. An "artificial" system would be one that let you get away with
making statements or alterations in bad faith. I am sharing a piece of
myself with the AI, not commanding or coercing or dominating or otherwise
diminishing.

> but rather, an organic integration of the seed AI with the
> global brain.
>
> And this, I suspect, is a plan of the second type, according to your
> categorization...

Yep! Sure is! Plans of the second type aren't impossible, just
difficult. Even when all the existing momentum is already there, people
still spend themselves and their lives in the course of actualizing it -
civil rights may have been an idea whose time had come, but a lot of
caring people still broke themselves in the course of making it real. It
seems like rather a harsh requirement to load onto a plan that *already*
requires the creation of true AI... you can only spend your life on one
impossibility, after all.

Besides which, your visualization of organic integration implies growth
occurring on a timescale comparable to the rate of change in human
civilizations, which is not realistic for an entity that can absorb
arbitrarily large multiples of its initial hardware, that has a serial
element speed millions of times faster than neurons, and that can
recursively self-improve. But we've already been through that.
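
To put rough numbers on "millions of times faster" (a back-of-envelope
sketch, not a measurement - the figures below are my own illustrative
assumptions, roughly a 200 Hz peak firing rate for a biological neuron
against the ~1 GHz clock of a current desktop chip):

    # Illustrative order-of-magnitude figures only.
    neuron_hz = 200             # ~ peak sustained firing rate of a neuron
    cpu_hz = 10 ** 9            # ~ 1 GHz, a typical desktop clock circa 2001
    print(cpu_hz // neuron_hz)  # -> 5000000, five million to one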

> My belief is that one will work toward Friendly AI better if one spends a
> bit of one's time actually engaged in directly Friendly (compassionate,
> helpful) activities toward humans in real time. This is because such
> activities help give one a much richer intuition for the nuances of what
> helping people really means.

If this were a method that relied on the programmer being Mother Teresa,
we might as well shoot ourselves now and be done with it, because humans
are by nature imperfect. Programmer-wise, FAI is a plan of the second
kind - it only requires a mostly altruistic programmer to get started.
Actually, it's possible that FAI will work even if you use Saddam Hussein
as the exclusive source of content, which would make FAI a plan of the
third kind. But I can't prove that, and it's not the conservative
assumption, so I'm assuming that the programmer's surface decisions need
to be mostly altruistic, and more importantly, mostly in favor of
correcting errors. Provide that, and the system can renormalize itself to
what it *would* have been if you *had* been Mother Teresa.

> This is an age-old philosophical dispute, of course. Your lifestyle and
> approach to work are what Nietzsche called "ascetic", and he railed against
> asceticism mercilessly while practicing it himself.

"Ascetism" is orthogonal to "unity of purpose". I spend an appropriate
amount of time in relaxation... maybe a little less than I would if I were
working for myself, but not much less, and certainly more while writing
and thinking than when I'm performing relatively automatic tasks, such as
working toward clearly defined coding goals. The difference is in the goals and the
purpose and the means by which decisions are made. I've resolved the
traditional tormented self-conflict simply by moving entirely to one end
of the spectrum, which is not exactly the Eastern-philosophy solution, but
which works pretty well if you can get away with it.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


