Re: IA vs. AI was: longevity vs singularity

Eliezer S. Yudkowsky (sentience@pobox.com)
Mon, 26 Jul 1999 00:05:46 -0500

den Otter wrote:
>
> > A suggestion bordering on the absurd. Uploading becomes possible at
> > 2040 CRNS. It becomes available to the average person at 2060 CRNS.
> > Transhuman AI becomes possible at 2020 CRNS. Nanotechnology becomes
> > possible at 2015 CRNS.
>
> -there's still a gap of 5 years between nanotech and AI, ample time
> to wipe out civilization if nanotech is as dangerous as you seem to
> assume.

Yes, I've noticed that. I stay up at night worrying about it.

> -I'd have to stop nanowar, assuming I'd want to do that, for 25 and
> not 45 years (because the Singularity comes immediately after
> the first wave of uploads. Anyone who's serious about survival,
> let alone ascension, must be in the first wave. Needless to say,
> be sure to get rich). Consequently, I'd have to hold back AI for
> 20 years, not 40. Still very hard, but feasible with the proper
> support.

Has it occurred to you that if the first uploads *are* charitably inclined, then it's *much* safer to be in the second wave? The first uploads are likely to be more in the nature of suicide volunteers, especially when you consider that a rough, destructive, but adequate scan is likely to come before a perfect, nondestructive scan.

You're staking an awful lot on the selfishness of superintelligences. Maybe you don't have the faintest speck of charity in your soul, but if uploading and upgrading inevitably wipe out enough of your personality that anyone, however charitable, would stop being cooperative - well, does it really make that much of a difference who this new intelligence "started out" as? It's not you. I know that you might identify with a selfish SI, but my point is that if SIs are *inevitably* selfish, if *anyone* would converge to selfishness, then that probably involves enough of a personality change in other departments that even you wouldn't call it you.

> -Stopping you from writing an AI wouldn't be all that hard, if I really
> wanted to. ;-)

Sure. One bullet, no more Specialist. Except that just means it takes a few more years. You can't stop it forever. All you can do is speed up the development of nanotech...relatively speaking. We both know you can't steer a car by selectively shooting out the tires.

> -If nanotech is advanced enough to destroy the world, it can
> surely also be used to move to space and live there long enough
> to transcend.

That takes more sophisticated nanotech, and it has to be nanotech in your own personal hands. I don't need personal nanotech to die, but I do need it to run. Realistically, it's obvious that the probabilities are not in my favor. Unless I'm so caught up in wishful thinking that I think I can get, and retain exclusive control of, nanotechnology AND uploading AND AI AND intelligence enhancement... but why continue?

> You can run and/or hide from nanotech, even
> fight it successfully, but you can't do that with a superhuman
> AI, i.e. nanotech leaves some room for error, while AI doesn't (or
> much less in any case). As I've said before, intelligence is the
> ultimate weapon, infinitely more dangerous than stupid nanites.

Quite. And an inescapable one. See, what *you* want is unrealistic because you want yourself to be the first one to upload, which excludes you from cooperation with more than a small group and limits your ability to rely on things like open-source projects and charitable foundations. What *they* want is unrealistic because they want to freeze progress.

Both of you are imposing all kinds of extra constraints. You're always going to be at a competitive disadvantage relative to a pure Singularitarian or the classic "reckless researcher", who doesn't demand that the AI be loaded down with coercions, or that nanotechnology not be unleashed until it can be used for space travel, or that nobody uploads until everyone can do it simultaneously, or that nobody has access to the project except eight people, and so on ad nauseam. The open-source free-willed AI project is going to be twenty million miles ahead while you're still dotting your "i"s and crossing your "t"s.

> -Synchronized Singularity *is* feasible for a limited number of people
> (some of whom may choose to uplift the rest later). Yes, it's extremely
> hard, but not impossible. More importantly, it's the only real option
> someone who values his existence has. The AI should only be used
> after all else has failed and the world is going to hell in a handbasket.

I don't understand your calculations.

A priori chance that you, personally, can be in the first six people to upload: 1e-9.
Extremely optimistic chance of the same: 1%.
Extremely pessimistic chance that transhuman AIs are benevolent: 10%.

Therefore, even setting my pessimistic figure against your optimistic one, it's 10 times better to concentrate on AI.
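
To spell out the arithmetic - the probabilities above are my assumptions, and treating "you survive" as a payoff of 1 in both cases is just a placeholder - a quick sketch:

    # Expected-survival comparison using the figures above (all assumed).
    p_first_wave = 0.01       # extremely optimistic chance of being in the first upload wave
    p_ai_benevolent = 0.10    # extremely pessimistic chance that a transhuman AI is benevolent

    # Treat "you survive" as a payoff of 1 in both scenarios.
    ev_upload = p_first_wave * 1.0
    ev_ai = p_ai_benevolent * 1.0

    print(ev_ai / ev_upload)  # 10.0 - concentrating on AI wins by a factor of ten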

> > after all, I've openly declared that
> > my first allegiance is not to humanity.
>
> No, it should be to yourself, of course. Anyway, so you're willing to
> kill everyone on earth, including yourself, to achieve...what, exactly?

Sorry, I'm not on the need-to-know list for that information.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way