Re: Maximizing results of efforts Re: Mainstreaming

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Apr 29 2001 - 12:45:02 MDT


Ben Goertzel wrote:
>
> Here's the parable. Suppose you're stuck on a boat in the middle of the
> ocean with a bunch of people, and they're really hungry, and the boat is
> drifting away from the island that you know is to the east. Suppose you're
> the **only** person on the boat who can fish well, and also the only person
> who can paddle well. You may be helping the others most by ignoring their
> short-term needs (fish) in favor of their longer-term needs (getting to the
> island). If you get them to the island, then eventually they'll get to an
> island with lots of food on it, a much better situation than being on a
> sinking boat with a slightly full stomach.
>
> If the other people don't **realize** the boat is drifting in the wrong
> direction, though, because they don't realize the island is there, then
> what? Then they'll just think you're a bit loopy for paddling so hard. And
> if they know you're a great fisherman, they'll be annoyed at you for not
> helping them get food....

Except I'm *not* a great fisherman. I am a far, far better paddler than I
am a fisherman. There are *lots* and *lots* of people fishing, and nobody
paddling. That is the situation we are currently in.

What is my answer missing? Sociality. Very well, then, let's look at the
social aspects of this.

Your answer makes sense for a small boat. Your answer even scales for a
hunter-gatherer tribe of 200 people. But we don't live in a
hunter-gatherer tribe. We live in a world with six billion people. From
a "logical" perspective, that means that it takes something like AI to get
the leverage to benefit that many people. From a "social" perspective, it
means that at least some of those people will always be ticked off, and
hopefully some of them will sign on.

Plans can be divided into three types. There are plans like Bill Joy's,
that work only if everyone on the planet signs on, and which get hosed if
even 1% disagree. Such plans are unworkable. There are plans like the
thirteen colonies' War for Independence, which work only if a *lot* of
people - say, 30% or 70% - sign on. Such plans require
tremendous effort, and pre-existing momentum, to build up to the requisite
number of people.

And there are plans like building a seed AI, which require only a finite
number of people to sign on, but which benefit the whole world. The third
class of plan requires only that a majority *not* get ticked off enough to
shut you down, which is a more achievable goal than proselytizing a
majority of the entire planet.

Plans of the third type are far less tenuous than plans of the second
type.

And the fact is that a majority of the world isn't about to knock on my
door and complain that I'm doing all this useless paddling instead of
fishing. The fall-off-the-edge-of-the-world types might knock and
complain about my *evil* paddling, but *no way* is a *majority* going to
complain about my paddling instead of fishing. Certainly not here in the
US, where going your own way is a well-established tradition, and most
people are justifiably impressed if you spend a majority of your time
doing *anything* for the public benefit.

As Brian Atkins said:

"The moral of the story, when it comes to actually having a large effect on
the world: the more advanced technology you have access to, the more likely
that the 'lone crusader' approach makes more sense to take compared to the
traditional 'start a whole movement' path. Advanced technologies like AI
give huge power to the individual/small org, and it is an utter waste of
time (and lives per day) to miss this fact."

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:00 MDT