Uploads and betrayal

Eliezer S. Yudkowsky (sentience@pobox.com)
Sun, 28 Nov 1999 15:00:48 -0600

"D.den Otter" wrote:
>
> My estimate is based on the very reasonable assumption that a
> SI wouldn't need anyone else (as the reader may recall we've
> discussed this before and Eliezer was in full agreement back
> then), and wouldn't be bound by redundant evolutionary
> adaptations such as altruism.

I agree.

> Add to that the fact that
> humans, if allowed to continue freely with their development
> after the SI has ascended, would most likely create and/or
> become superintelligences (i.e. competition for resources and
> a potential threat),

This is the fundamental disagreement. I simply don't believe in competition between SIs. I think the motives converge, and multiple SIs would merge, or all but one would commit suicide, or whatever.

> and you have a pretty strong argument
> for extinction. Now, where does that 70% figure come from??

So why are we still here? If your scenario were that likely, it would have to have happened at least once before in the Universe, and the results should have reached us by now.

> > Trust me: I don't think I'm infallible.
>
> But nonetheless you are prepared to act as if you *were*
> infallible...The moment you activate your ASI Golem, you
> and you alone will have passed judgement on the world,
> using your finite wisdom.

Like I said in the other post, the SI passes judgement. I'm not the smartest being in this chain of causality.

Oh, and ignore the "ASI" part. den Otter has previously admitted that "human" and "artificial" SIs should be indistinguishable after the fact.

> Of course. But, so what? The primary aim of the ascension
> initiative isn't to save "humanity", but to save *oneself*.
> And don't even try to get sanctimonious on me, Eliezer, as
> saving humanity isn't your primary concern either.

Yes, but *my* primary concern is the ultimate good of the Universe; not me, not my country, not my species, not even sentient life. I think a little sanctimoniousness is called for.

Just kidding, of course. I've never tried to imply that others have a "responsibility" to do what's right, any more than you have a "responsibility" to believe the sky is blue. I am not the judge of my fellow humans. I just present my reasons for believing that a particular action is right; the rest is up to you.

> Your obsession with "the Objective" is, IMHO, essentially
> religious in nature and has little to do with common sense.
> The very fact that you refuse to give survival its rightful
> (top) place indicates that there is a serious flaw in your
> logic department.

Me, religious? I've tried to show the reasoning behind my goals in considerable detail; you simply rest on the blatant assertion that survival is the only goal. I cannot comprehend how you can see every emotion as a non-binding evolutionary artifact with the sole exception of selfishness. It's as if Einstein had formulated General Relativity with the proviso that it doesn't work on Tuesdays.

Yes, survival is sometimes a prerequisite for other goals, although there are dozens of goals for which this is trivially or nontrivially untrue. But survival still doesn't exist as a goal within the system unless there are other goals to give rise to it, and you never did explain where those other goals are, or where they come from.

Are they arbitrary? Why can't you program an "ASI" to upload everyone? Are they predetermined? How could there be any competition between SIs?

Your logic only works if the *ultimate* motives of SI, not just subgoals like survival, are (1) absolutely predetermined and (2) observer-dependent. I don't see any plausible way for this to happen. I mean, in theory, I can create an AI with no concept of "self", just a model of an internal collection of modules, embedded within its model of the rest of the Universe. If this AI becomes selfish, which module is the "self"? Which function in the module? Which line of source code in the function?
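To make that concrete, here's a toy sketch of my own (purely illustrative; it isn't a design anyone in this exchange has proposed): an agent whose world-model holds its own modules as ordinary objects alongside everything else. Nothing in the data structure singles out any entry as "the self" for a selfishness drive to latch onto.

# Toy sketch (mine, illustrative only): the AI's internal modules are just
# more entries in the same world-model that contains the sun and the sky.
class WorldModel:
    def __init__(self):
        self.objects = {}            # everything modeled, internal or external

    def add(self, name, description):
        self.objects[name] = description

model = WorldModel()
model.add("sun", "large fusion reactor, external")
model.add("goal_system", "module that ranks actions by predicted outcomes")
model.add("planner", "module that searches for action sequences")
model.add("memory_store", "module that holds past observations")

# The internal modules sit in the same dictionary as the sun. Nothing marks
# any one of them, or their union, as "me" -- so which entry gets selfish?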

> Of course not. Trust no-one, Spike, and certainly not Eliezer,
> who values the Singularity more than your, mine or anyone else's
> existence. Or maybe you *can* "trust" us, as we're both pretty
> honest about what could happen. Both our motivations (and those
> of everyone else) are essentially selfish of course; the only
> real difference is that Eliezer is selfishly trying to accomplish
> something which probably isn't in his enlightened, rational
> ("objective") self-interest.

"Selfishly"? Which part of me is "myself"? I see no reason there should be an objective answer to this question.

> Only one Power is (obviously) the only stable, and therefore

Bah. The use of "obviously" is not a justification; if there's any religiosity in this discussion, you've just displayed it.

> Hey, you're the Pied Piper here, dude! The ASI-Singularity
> is the abyss towards which the lemmings are dancing,
> hypnotized by the gaily tunes of World Peace, Universal Love
> and Unlimited Wisdom. They're in for one hell of a rude,
> albeit mercifully short, awakening...

"Lemmings"? Is there any risk factor which I haven't publicly acknowledged?

> > he'll jump into
> > the "experimental prototype hardware" and leave you suckers to burn.
>
> Oh, now I'm flattered. Apparently you think that I could
> trick everyone directly involved in the project (which
> could easily be a hundred people or more) and do a "perfect"
> ascension on the sly with some experimental prototype, which,
> foolishly, was left completely unguarded.

Of course you could. Such double-crosses have happened thousands of times in recorded history. Are you going to leave three people to guard it? Why shouldn't they just climb in and ditch the rest of you? Are you going to write software that guarantees you're all upgraded in unison? Who writes the software? Who debugs the software? Even if you can all upgrade in perfect synchrony, which will be impossible because all the early upgrades will require individualized attention, what happens when the first upgraded mind to turn vis attention to a particular point spots a flaw in the software?

Are you going to scan in everyone simultaneously? How, exactly? Let's say den Otter is the first one scanned. He is then suspended so he can't outrace everyone else. But now he's helpless. Now he's a competitor. Now he's superfluous to the Cause. So he gets flushed. Repeat.

Sooner or later, you're going to fall out of synchrony and wind up in a race condition. One mistake is all it takes, and with something of this magnitude, there's no way you're going to get it right the first time. Human history is laced with backstabbings, and your scenario contains unprecedented opportunities for betrayal.
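If you want to see how fast "perfect synchrony" decays even without any betrayal, here's a toy simulation of my own (the numbers are invented and prove nothing by themselves): give each upload a slightly different payoff per upgrade step, since the early upgrades require individualized attention, and watch the differences compound.

# Toy simulation (mine; invented numbers): N uploads self-upgrade "in unison",
# but each upgrade step pays off slightly differently per mind because it
# needs individualized attention. Multiplicative noise compounds, so
# identical starting points drift apart on their own.
import random

def simulate(num_uploads=5, steps=1000, jitter=0.01, seed=0):
    rng = random.Random(seed)
    ability = [1.0] * num_uploads          # everyone starts out identical
    for _ in range(steps):
        for i in range(num_uploads):
            # each step's gain varies a little from mind to mind
            ability[i] *= 1.0 + rng.uniform(0.0, jitter)
    return ability

final = simulate()
print("leader/laggard ratio after 1000 steps:", round(max(final) / min(final), 3))
# With these toy numbers the leader typically ends up a noticeable fraction
# ahead of the laggard -- and in a winner-takes-all race, any lead held by a
# mind smarter than its monitors is the whole game.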

Of course, I think that none of this matters, but I'm given to understand that you don't share my "lemming" opinion. But if you get that far, please allow me to offer my services. I'll run you all through the uploading machinery, start you all up simultaneously, and then walk away. Please feel free to take advantage of the lemming's touching faith and altruism.

In fact, you could even upload me, then ask me to act as a judge to ensure that others are upgraded in synchrony, or even to do the upgrading, although I wouldn't be upgraded myself. Don't trust me? Can't see my source code? I could write an AI for you who would do the same thing...

> Somehow I don't think
> that the others would be SUCH morons (and if they were, they'd
> *deserve* to get cheated). *Of course* no-one could trust anyone
> else *completely*. For example, would you leave anyone on this
> list alone in a room with a fully functional uploader for even
> 5 minutes? I sure wouldn't. People betray each other for the
> most petty reasons, and instant godhood would be a truly
> unprecedented temptation. Consequently, the security protocols
> would have to be "unprecedented" as well. Duh!

Okay, so to your list of blatant technological impossibilities, ranging from perfect uploading developed without a testing stage to uploading developed before nanotechnological weaponry, we can now add security protocols that must be written by humans, must be absolutely perfect on a binary level under the scrutiny of superintelligence, and must keep the upgrading of enormously complex structures ("minds") in perfect synchrony with respect to the extremely high-level quantity called intelligence.

den Otter, if I had to build something that good, it would be a self-modifying AI. And if I wasn't allowed to do it with a self-modifying AI, I'd give up. That's an engineering opinion.

The problem you've cited requires a security protocol with an intelligence equalling or exceeding that of the beings on whom cooperation is being enforced.

Oh, and may I inquire what happens when the time comes for you guys to come out of the little box? Who gets first access to the nanomachines? Do you all have to unanimously agree on any output from your nanocomputer? Wouldn't that create an interesting opportunity for filibusters? I have this image of you and your fellow schemers locked in a little black box for eternity, trying to gain the maximum advantage once you come out...

> Now Eliezer seems to think, or should I say wants *you* to think
> (remember, this guy's got his own agenda),

Which, like den Otter, I publicly admit.

> that synchronized
> uploading would only make sense if a bunch of noble, selfless
> saints did it. This is clearly not true. In the REAL world, it is
> perfectly feasible to have fruitful cooperation without 100%
> mutual trust. It is the very basis of our society, and indeed
> nature itself.

It's not feasible to have 100% perfect cooperation, especially if the first flaw takes all.

> The synchronized uploading scenario is the classical prisoner's
> dilemma. Of course we can expect some attempts at defection,
> and should take the appropriate precautions.

Name three.
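For the record, here is the game structure you're appealing to, in its one-shot form (the payoff numbers are mine and purely illustrative). The difficulty with "appropriate precautions" is that the uploading scenario is played exactly once, and defection removes the other players from the game.

# One-shot prisoner's dilemma sketch (illustrative payoffs of my own).
# "Defect" here means "ascend first", which ends the game for everyone else.
PAYOFFS = {                              # (my move, their move) -> my payoff
    ("cooperate", "cooperate"): 3,       # everyone ascends together
    ("cooperate", "defect"):    0,       # I wait politely; they ascend alone
    ("defect",    "cooperate"): 5,       # I ascend first
    ("defect",    "defect"):    1,       # race condition; luck decides
}

def best_response(their_move):
    # pick my move given a guess about theirs
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)])

for theirs in ("cooperate", "defect"):
    print("if they", theirs + ":", "I should", best_response(theirs))
# Defection dominates either way, and with no repeated rounds there is no
# future in which to punish it.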

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way