RE: Maximizing results of efforts Re: Mainstreaming

From: Ben Goertzel
Date: Sun Apr 29 2001 - 22:01:27 MDT

Eli, this was a very interesting and well-thought-out reply.

I like your categorization of plans:

> Plans can be divided into three types. There are plans like Bill Joy's,
> that work only if everyone on the planet signs on, and which get hosed if
> even 1% disagree. Such plans are unworkable. There are plans like the
> thirteen colonies' War for Independence, which work only if a *lot* of
> people - i.e., 30% or 70% or whatever - sign on. Such plans require
> tremendous effort, and pre-existing momentum, to build up to the requisite
> number of people.
> And there are plans like building a seed AI, which require only a finite
> number of people to sign on, but which benefit the whole world. The third
> class of plan requires only that a majority *not* get ticked off enough to
> shut you down, which is a more achievable goal than proselytizing a
> majority of the entire planet.
> Plans of the third type are far less tenuous than plans of the second
> type.

Here is my sense of things, which I know is different from yours.

There's the seed AI, and then there's the "global brain" -- the network of
computing and communication systems and humans that increasingly acts as a
whole system.

For the seed AI to be useful to humans rather than indifferent or hostile to
them, what we need in my view is NOT an artificially-rigged Friendliness
goal system, but rather, an organic integration of the seed AI with the
global brain.

And this, I suspect, is a plan of the second type, according to your
categorization.

> And the fact is that a majority of the world isn't about to knock on my
> door and complain that I'm doing all this useless paddling instead of
> fishing. The fall-off-the-edge-of-the-world types might knock and
> complain about my *evil* paddling, but *no way* is a *majority* going to
> complain about my paddling instead of fishing. Certainly not here in the
> US, where going your own way is a well-established tradition, and most
> people are justifiably impressed if you spend a majority of your time
> doing *anything* for the public benefit.

My belief is that one will work toward Friendly AI better if one spends a
bit of one's time actually engaged in directly Friendly (compassionate,
helpful) activities toward humans in real-time. This is because such
activities help give one a much richer intuition for the nuances of what
helping people really means.

This is an age-old philosophical dispute, of course. Your lifestyle and
approach to work are what Nietzsche called "ascetic", and he railed against
asceticism mercilessly while practicing it himself. I'm fairly close to an
ascetic by most standards -- I spend most of my time working on abstract
stuff, and otherwise I don't do all that much else aside from play with my
kids -- but, yes, I admit it, I spend some of my time indulging myself in
the various pleasures of the real world ;p ... and some of my time doing
stuff like teaching in my kids' schools, which is fun and useful to the
kids, but doesn't use my unique talents as fully as working on AI. I think
my work is the better, not the worse, for these "diversions".... But
perhaps it wouldn't be so for you.... Perhaps the philosophical dispute
over the merits of asceticism just comes down to individual differences in
personality ;p


> As Brian Atkins said:
> "The moral of the story, when it comes to actually having a large effect on
> the world: the more advanced technology you have access to, the more likely
> that the "lone crusader" approach makes more sense to take compared to the
> traditional "start a whole movement" path. Advanced technologies like AI
> give huge power to the individual/small org, and it is an utter waste of
> time (and lives per day) to miss this fact."
> -- -- -- -- --
> Eliezer S. Yudkowsky
> Research Fellow, Singularity Institute for Artificial Intelligence

This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:00 MDT