Re: Yudkowsky's AI (again)

den Otter (neosapient@geocities.com)
Thu, 25 Mar 1999 23:12:41 +0100



> From: Eliezer S. Yudkowsky <sentience@pobox.com>
> den Otter wrote:
> >
> > Not necessarily. Not all of us anyway.
>
> The chance that some humans will Transcend, and have their self
> preserved in that Transcendence, while others die in Singularity - is
> effectively zero. (If your self is preserved, you wouldn't kill off
> your fellow humans, would you?)

I assume this is sarcasm?

> We're all in this together. There are
> no differential choices between humans.

Tell that to the other guys...

> > Conclusion: we need a (space) vehicle that can move us out of harm's
> > way when the trouble starts. Of course it must also be able to
> > sustain you for at least 10 years or so. A basic colonization of
> > Mars immediately comes to mind. Perhaps a scaled-up version
> > of Zubrin's Mars Direct plan. Research aimed at uploading must
> > continue at full speed of course while going to, and living on, Mars
> > (or another extra-terrestrial location).
>
> Impractical.

The best option so far. Better than rushing to build a seed AI and then hoping it won't hurt us.

> Probability effectively zero.

Depends entirely on one's motivation. It's definitely *hard* (though certainly not impossible -- Bill Gates could fund a mission right now, probably even without too much financial discomfort), but who said that transcendence would come easy?

> At absolute most you might
> hope for an O'Neill colony capable of supporting itself given nanotech.

Not necessary. After all, you'd wait until things really start looking bad, which likely means that nanotech is becoming practical and uploading is only decades away (at most). Even without nanotech, one could create a colony on Mars that could operate for years on end.

> Besides, since only *you* are going to Transcend, why should *I* help
> you build a Mars colony?

We [a transhuman group] would cooperate to get into a position where we could all transcend simultaneously and then (perhaps) settle our differences in one way or another. First things first. Unless you happen to know of some way to get uploaded all by yourself, cooperation is useful.

> Let us say that I do not underestimate the chance of a world in which
> neither exists, to wit, as close to zero as makes no difference. Given
> a choice between ravenous goo and a Power, I'll take my chances on the
> benevolence of the Power. "Unacceptable" my foot; the probability
> exists and is significant,

No, it is unknown, or extremely slim if you assume that the Power still thinks in understandable terms.

> *unlike* the probability of the goo deciding
> not to eat you.

You can outrun goo, perhaps even contain or destroy it. Try that with a Power.

> > Any kind of Power which isn't you is an unaccepable threat,
> > because it is completely unpredictable from the human pov.
> > You are 100% at its mercy, as you would be if God existed.
> > So, both versions are undesirable.
>
> So only one human can ever become a Power. By golly, let's all start
> sabotaging each other's efforts!

No, let's cooperate.

> Sheesh. There's a reason why humans have evolved an instinct for altruism.

Yes, and it may soon become obsolete.

> > We _must_ be choosy. IMHO, a rational person will delay the Singularity
> > at (almost?) any cost until he can transcend himself.
>
> If AI-based Powers are hostile, it is almost certain, from what I know
> of the matter, that human-based Powers will be hostile as well. So only
> the first human to Transcend winds up as a Power.

See above.

> So your a priori
> chance of Transcending under these assumptions is one in six billion,

Rather, something like one in 10,000 (the number increases with a "late" Singularity and decreases with an early one). Most people, even those in the most developed countries, will simply never know what hit them. Only a select few grasp the concept of SI and the Singularity, and even fewer will be in a position to do something with that knowledge. Those are the facts.

> and no more than one can get the big prize. So you'll try to sabotage
> all the Singularity efforts, and they'll try to sabotage you. A snake-pit.

Life is a snake pit, and will likely remain so past the Singularity. Still, there's a significant chance that if a group drafted from this list started working on specific uploading tech, they'd finish first simply because these ideas are so far ahead of their time. Most people, including most economic/military/scientific "leaders", are wasting their time debating insignificant things like cloning and genetically engineered crops. AI, SI and the Singularity are way out there in the realm of SF.

> If only one human can ever become a Power, your chance of being that
> human cannot possibly exceed one in a hundred.

That's a fair assumption. If most of those hundred agree to do it simultaneously, you virtually eliminate that problem.

> Combined with the fact
> that AI transcendence will be possible far earlier, technologically
> speaking,

Yes, this is a tricky problem. The fact that [as a human] you only need to delay AI transcendence for a limited amount of time, not stop it completely, does make the odds somewhat better though.

> In other words, almost regardless of the relative probability of AI
> hostility and AI benevolence, you have a better absolute chance of
> getting whatever you want if you create an AI Power as fast as possible.

A weak AI, yes, but an AI Power is by definition a great gamble, and IMO an unnecessary one.