Re: IA vs. AI was: longevity vs singularity

den Otter
Mon, 2 Aug 1999 00:53:15 +0200

> From: Eliezer S. Yudkowsky <>

> > The first uploads would no doubt be animals (of increasing complexity),
> > followed by tests with humans (preferably people who don't grasp
> > the full potential of being uploaded, for obvious reasons).
> You have to be kidding. Short of grabbing random derelicts off the
> street, there's simply no way you could do that. Or were you planning
> to grab random derelicts off the street?

The animal tests alone would make the procedure considerably safer (if it works for an ape, it will most likely work for a human too), and it's really no problem to find relatively "clueless" yet eager volunteers; I could very well imagine serious gamers lining up to get uploaded into the "ultimate game", ExistenZ-style. If you make the procedure reversible, i.e. leave the brain intact and switch consciousness between the brain and the machine, there needn't be much risk involved.

> And did I mention that by the
> time you can do something like that, in secret, on one supercomputer,
> much less run 6000 people on one computer and then upgrade them in
> synchronization, what *I* will be doing with will -

Yes, I know the odds are in your favor right now, but that could easily change if, for example, there were a major breakthrough in neurohacking within the next 10 years. Or programming a sentient AI may not be that easy after all. Or the Luddites may finally "discover" AI and do some serious damage. Or... Oh, what the hell, the upload path is simply something that *has* to be tried. If it works, great! If it doesn't... well, nothing lost, eh?

> Besides, there's more to uploading than scanning. Like, the process of
> upgrading to Powerdom. How are you going to conduct those tests, hah?

After having uploaded, you could, for example, run copies of yourself at very slow speed (in relation to your "true" self), and fool around with different settings for a while, before either terminating, or merging with, the test copy. If the tests are successful, you can upgrade and use copies of your new state, again running at relatively slow speed, to conduct the next series of tests. Ad infinitum.

Of course this wouldn't eliminate all risks, but it *would* reduce them somewhat, and of course this method could be refined considerably (perhaps free will could be "surgically removed" from the test copies, or something like that).
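The test-copy procedure sketched above is essentially a vet-then-adopt loop, which can be illustrated in a few lines of toy code (a minimal sketch only; the dict "state", the upgrade functions, and the evaluate check are all my own hypothetical stand-ins, not any real uploading machinery):

```python
def upgrade_iteratively(state, candidate_upgrades, evaluate):
    """Apply each upgrade only after vetting it on a (slowed-down) test copy."""
    for upgrade in candidate_upgrades:
        test_copy = dict(state)   # fork a test copy of the current self
        upgrade(test_copy)        # apply the experimental change to the copy
        if evaluate(test_copy):   # observe the outcome on the copy
            state = test_copy     # merge: adopt the vetted new state
        # otherwise the copy is discarded and the current state is kept
    return state

# Toy usage: one beneficial upgrade, one harmful one; only the first survives.
base = {"capability": 1}
upgrades = [
    lambda s: s.update(capability=s["capability"] * 2),  # helpful tweak
    lambda s: s.update(capability=0),                    # harmful tweak
]
final = upgrade_iteratively(base, upgrades, lambda s: s["capability"] > 1)
```

Each round risks only the copy, never the "true" self, which is the whole point of running the tests at relatively slow speed before merging.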

> > Well, IMHO the scan version of uploading is utterly useless from the
> > individual's point of view (copy paradox and all that), and I certainly
> > wouldn't waste any time on this method. In fact, I'm just as opposed
> > to it as I am to conscious AI.
> So. No backups.

No, I'd like to start out as an original, but after having uploaded I might very well change my mind about copies. Copies are certainly useful for testing, as described above.

> > Forget PC and try to think like a SI (a superbly powerful
> > and exquisitely rational machine).
> *You* try to think like an SI. Why do you keep on insisting on the
> preservation of the wholly human emotion of selfishness?

Because a minimal amount of "selfishness" is necessary to survive. Survival is the basic prerequisite for everything else.

> I don't
> understand how you can be so rational about everything except that!
> Yes, we'll lose bonding, honor, love, all the romantic stuff. But not
> altruism. Altruism is the default state of intelligence. Selfishness
> takes *work*, it's a far more complex emotion.

Well, so what? *Existence* takes work, but that doesn't make it irrational.

> I'm not talking about
> feel-good altruism or working for the benefit of other subjectivities.
> I'm talking about the idea of doing what's right, making the correct
> choice, acting on truth.

What you describe is basically common sense, not altruism (well, not according to *my* dictionary, anyway). Perhaps you should use some other term to avoid confusion.

> Acting so as to increase personal power
> doesn't make rational sense except in terms of a greater goal.

Yes, and the "greater goal" is survival (well, strictly speaking it's an auxiliary goal, but one that must always be included).

> I *know* that we'll be bugs to SIs. I *know* they won't have any of the
> emotions that are the source of cooperation in humans. I *still* see so
> many conflicting bits of reasoning and evidence that you might as well
> flip a coin as ask me whether they'll be benevolent. That's my
> probability: 50%.

Yes, ultimately it's all just unknowable. BUT, if we assume that the SIs will act on reason, the benevolence-estimate should be a lot lower than 50% (IMO).

> If Powers are hungry, why haven't they expanded continually, at
> lightspeed, starting with the very first intelligent race in this
> Universe? Why haven't they eaten Earth already? Or if Powers upload
> mortals because of a mortally understandable chain of logic, so as to
> encourage Singularities, why don't they encourage Singularities by
> sending helper robots? We aren't the first ones. There are visible
> galaxies so much older that they've almost burned themselves out. I
> don't see *any* reasonable interpretation of SI motives that is
> consistent with observed evidence. Even assuming all Powers commit
> suicide just gives you the same damn question with respect to mortal aliens.

I dunno, maybe we're the first after all. Maybe intelligent life really is extremely rare. I think Occam's Razor would agree.

> And, speaking of IA motivations, look at *me*. Are my motivations
> human?

Yes, your motivations are very human indeed. Like most people, you seem to have an urge to find a higher truth or purpose in life, some kind of objective answer (among other things, like personal glory and procreation). Well, IMHO there's no ultimate answer. It's an illusion, an ever receding horizon. Even the smartest SI can only guess at what's right and true, but it will never ever know for sure. It will never know whether there's a God (etc.) either, unless He reveals himself. And even then, is he *really* THE God? Uncertainty is eternal. The interim meaning of life is all there is, and all there ever will be. Out of all arbitrary options, the search for the highest possible pleasure is the one that makes, by definition, the most sense. It is, together with the auxiliary goals of survival and the drive to increase one's knowledge and sphere of influence, the most logical default option. You don't have to be a Power to figure that one out.

> > Or are you hoping for an insane Superpower? Not something you'd
> > want to be around, I reckon.
> No, *you're* the one who wants an insane SI. Selfishness is insane.

You must be joking...

> Anything is insane unless there's a rational reason for it, and there is
> no rational reason I have ever heard of for an asymmetrical world-model.
> Using asymmetric reflective reasoning, what you would call
> "subjectivity", violates Occam's Razor, the Principle of Mediocrity,
> non-anthropocentrism, and... I don't think I *need* any "and" after that.

Reason dictates survival, simply because pleasure is better than death. If "symmetry" or whatever says I should kill myself, it can kiss my ass.

> > Synchronized uploading would create several SIs at once, and though
> > there's a chance that they'd decide to fight each other for supremacy,
> > it's more likely that they'd settle for some kind of compromise.
> Or that they'd merge.

Yep, but this could still be a compromise (for some).

> I find it hard to believe you can be that reasonable about sacrificing
> all the parts of yourself, and so unreasonable about insisting that the
> end result start out as you. If two computer programs converge to
> exactly the same state, does it really make a difference to you whether
> the one labeled "den Otter" or "Bill Gates" is chosen for the seed?

Yes, it matters because the true "I", the raw consciousness, demands continuity. There is no connection between Mr. Gates and me, so it's of little use to me if he lives on after my death. "I" want to feel the glory of ascension *personally*. That it wouldn't matter to an outside observer is irrelevant from the individual's point of view.

> > > All you can do is
> > > speed up the development of nanotech...relatively speaking. We both
> > > know you can't steer a car by selectively shooting out the tires.
> >
> > No, but you *can* slow it down that way.
> Of course you can. But does it really matter all that much, to either
> of us, whether a given outcome happens in ten years or twenty?

Oh, if you assume that death is the most likely outcome, a decade extra does indeed matter. It's better than nothing.

> What
> matters is the relative probabilities of the outcomes, and trying to
> slow things down may increase the probability of *your* outcome relative
> to *my* outcome, but it also increases the probability of planetary
> destruction relative to *either* outcome... increases it by a lot more.

Compared to a malevolent SI, all other (nano)disasters are peanuts, so it's worth the risk IMO.

> > Theoretically a group of almost any size could do this, more or
> > less SETI-style (but obviously with a good security system in
> > place to prevent someone from ascending on the sly). I'm not
> > excluding anyone, people exclude *themselves* by either not
> > caring or giving in to defeatism, wishful thinking etc.
> I see. You're going to simultaneously upload six million people?

Though *theoretically* possible, there might be some practical problems with such massive numbers. Anyway, since it's very unlikely that so many people would be interested in (1st wave) uploading, it's hardly relevant.

> And
> then upgrade them in such a way as to maintain synchronization of
> intelligence at all times? Probability: Ze-ro.

Perhaps non-sentient AIs could maintain synchronization up to a certain level, but sooner or later it's every upload for himself, obviously, unless merging turns out to be a popular choice. Still, this fighting chance is better than meek surrender to an ASI. It's the morally correct thing to do.

> I think you overestimate the tendency of other people to be morons.
> "Pitch in to help us develop an open-source nanotechnology package, and
> we'll conquer the world, reduce you to serfdom, evolve into gods, and
> crush you like bugs!"

BS; by joining you get an equal chance to upload and become posthuman. If you let the others cheat you, you'll only have yourself to blame. And looking at the world and its history, it's hard to *underestimate* people's tendency to be morons, btw. Damn!

> Even "Help us bring about the end of the world in
> such a way that it means something" has more memetic potential than
> that. There's a *lot* more evolution devoted to avoiding suckerhood
> than lemminghood.

So what? Rational people seek to upload, and will cooperate with other such rational individuals to increase their chances of success. If suckers won't join because they're afraid to get suckered, that's their problem. They'll die like the lemmings they are.

> > What's a "pure" Singularitarian anyway, someone who wants a
> > Singularity asap at almost any cost? Someone who wants a
> > Singularity for its own sake?
> Yep.

(from another thread)
> If this turns out to be true, I hereby award myself the "Be Careful What
> You Wish For Award" for 1996. Actually, make that the "Be Careful What
> You Wish For, You Damned Moron Award",

In case of an AI-driven Singularity, something like the above would make a nice epitaph...

> Humanity *will* sink. That is simply not something
> subject to alteration. Everything must either grow, or die. If we
> embrace the change, we stand the best chance of growing. If not, we
> die. So let's punch those holes and hope we can breathe water.

Ok, but don't forget to grow some gills before you sink the ship...

> Believe me, den Otter, if I were to start dividing the world's
> population into two camps by level of intelligence, we wouldn't be on
> the same side.

No doubt.

> You may be high-percentile but you're still human, not human-plus-affector.

Tough luck, I'm The One.

> And, once again, you are being unrealistic about the way technologies
> develop. High-fidelity (much less identity-fidelity) uploading simply
> isn't possible without a transhuman observer to help.
> Any uploadee is
> a suicide volunteer until there's an SI (whether IA or AI) to help.
> There just isn't any realistic way of becoming the One Power because a
> high-fidelity transition from human to Power requires a Power to help.

Assumptions, assumptions. We'll never know for sure if we don't try. Help from a Power would sure be nice, but since we [humans] can't rely on that, we'll have to do it ourselves. If we can upload a dog, a dolphin and a monkey successfully, we can probably do a human too.

> > Besides, a 90% chance of the AI killing us
> > isn't exactly an appealing situation. Would you get into a
> > machine that kills you 90% of the time, and gives total,
> > unprecedented bliss 10% of the time? The rational thing is
> > to look for something with better odds...
> Yes, but you haven't offered me better odds. You've asked me to accept
> a 1% probability of success instead.

I think that your 1% figure is rather pessimistic. In any case, don't forget that AI researchers like yourself directly and disproportionately influence the odds of AI vs IA. If some of the top names switched sides, you could quite easily make IA the most likely path to ascension. You demand better odds, but at the same time you actively contribute to the discrepancy.

Ultimately it always comes down to one thing: how likely is a nuclear/nano war really within the next 30 years or so, and how much damage would it do? Does this threat justify the all-or-nothing approach of an AI Transcend? Well, there hasn't been a nuclear war since the technology was developed more than 50 years ago; I'd say that's a pretty good precedent. Also, nukes and biological weapons haven't been used by terrorists yet, which is another good precedent. Is this likely to change in the first decades of the next century? If so, why?

Even if we had a full-scale nuclear conflict, this would by no means kill everyone; in fact, most people would probably survive, as would "civilization". A "malevolent" AI would kill *everybody*. Is grey goo really that big a threat? A fully autonomous replicator isn't exactly basic nanotech, so wouldn't it be likely that people would already be starting to move to space (due to the increasingly low costs of spaceship/habitat etc. construction) before actual "grey/black goo" could be developed? And even on earth one could presumably make a stand against goo using defender goo, small nukes and who knows what else. Many would be killed, but humanity probably wouldn't be wiped out, far from it. A malevolent AI would kill *everyone*. See my point?

> As far as I can tell, your evaluation of the desirability advantage is
> based solely on your absolute conviction that rationality is equivalent
> to selfishness. I've got three questions for you on that one. First:
> Why is selfishness, an emotion implemented in the limbic system, any
> less arbitrary than honor?

Selfishness may be arbitrary, but it's also practical: it's needed to keep you alive, and being alive is... etc. "Selfishness" is a very sound foundation to build a personality on. Honor often leads to insane forms of altruism, which can result in suffering and even death, and is therefore inferior as a meme (assuming that pleasure is better than death and suffering).

> Second: I know how to program altruism into
> an AI; how would you program selfishness?

Make survival the Prime Directive.

> Third: What the hell makes
> you think you know what rationality really is, mortal?

What the hell makes you think you know what rationality really is, Specialist/AI/SI/Hivemind/PSE/Power/God etc., etc.? Oops, I guess it's unknowable. Ah well, I guess I'll try to find eternal bliss then, until (if ever) I come up with something better. But feel free to kill yourself if you disagree.

> And I think an IA Transcend has a definite probability of being less
> desirable than an AI Transcend. Even from a human perspective. In
> fact, I'll go all the way and say that from a completely selfish
> viewpoint, not only would I rather trust an AI than an upload, I'd
> rather trust an AI than *me*.

Well, speak for yourself. Since the AI has to start out as a (no doubt flawed) human creation, I see no reason to trust it more than the guy(s) who programmed it, let alone myself.

> I think you're expressing a faith in the human mind
> that borders on the absurd just because you happen to be human.

It may be messy, but it's all I have so it will have to do.

> > What needs to be done: start a project with as many people as
> > possible to figure out ways to a) enhance human intelligence
> > with available technology, using anything and everything that's
> > reasonably safe and effective
> *Laugh*. And he says this, of course, to the author of "Algernon's Law:
> A practical guide to intelligence enhancement using modern technology."

Um no, actually this was meant for everyone out there; I know that *you* have your own agenda, and that the chances of you abandoning it are near-zero, but maybe someone else will follow my advice. Every now and then, the supremacy of the AI meme needs to be challenged.

> Which is the *other* problem with steering a car by shooting out the
> tires... Taking potshots at me would do a lot more to cripple IA than
> AI. And, correspondingly, going on the available evidence, IAers will
> tend to devote their lives to AI.

Well, there certainly are a lot of misguided people in that field, no doubt about that. Fortunately there also are plenty of people (in the medical branches, for example) who, often unknowingly, will help to advance the cause of IA, and ultimately uploading.