Re: Principle of Nonsuppression

den Otter (neosapient@geocities.com)
Thu, 02 Sep 1999 19:41:20 +0200



> From: Eliezer S. Yudkowsky <sentience@pobox.com>

> > > 1) We're so busy trying to sabotage each other's efforts that we all
> > > wind up getting eaten by goo / insane Powers.
> >
> > Let's hope we can somehow prevent that...
>
> Ooh! Yeah! Great plan!

See, we have a little problem: sabotaging each other's efforts would likely get everyone killed, but so would letting you do your AI thing. Damned if you do, damned if you don't. Oh dear.

> > > 2) Iraq gets nanotechnology instead of the US / the AI project has to
> > > be run in secret and is not subject to public supervision and error correction.
> >
> > Note that "curbing" AI (or any other dangerous technology) by no means
> > has to involve government-imposed bans. There are other (better) ways to
> > do this.
>
> Such as...

Dissuade individual scientists from going ahead with dangerous research. Many of those involved in building the bomb regretted their actions afterwards, afaik. With AI, there may be no "afterwards", so the people working in this field should be made to understand what they're dealing with. The same goes, though to a lesser extent, for nanotech. Of course, *some* people (like the suicidal nihilists or the hopelessly naive technophiles) will be immune to even the harshest psychological warfare, and might require more drastic measures. Though actually killing them may not be necessary; that would be rather barbaric and inhumane, after all. No, a blast from the Zombie Gun (TM) will do just fine. Or...no, that's a surprise.

> > No, scaring the
> > public and the government would more likely result in a tightening of
> > project security, which is quite good because it would buy us some time.
>
> Buy *who* some time?

Everyone. Mankind.

> > > "Trying to suppress a dangerous technology only inflicts more damage."
> > > (Yudkowsky's Threats #2.)
> >
> > How defeatist. I'd say that suppressing the proliferation of nukes, for
> > example, was a *great* idea. Otherwise we probably wouldn't be here
> > right now. Stupid as they may be, big governments do offer fairly good
> > stability, on average.
>
> Yes, nuclear weapons are an interesting case. I should say that trying
> to suppress the *creation* of a technology - research and development -
> only inflicts more damage. I'm fully in favor of suppressing the
> *proliferation* of dangerous technology.

The best way to prevent proliferation (nanotech and AI are *much* harder to contain than nukes) is to hold back the technology for as long as possible.

But, contrary to what you may think, I'm not all that interested in "stopping" any technology, not even AI. I'd rather just keep up with developments, and stimulate the "good" technologies. The problem is that doing (just) the latter may simply not be good enough. We'll see...

> Once Zyvex has nanotechnology,
> I'd be fully in favor of their immediately conquering the world to
> prevent anyone else from getting it.

<bitter sarcasm> Excellent idea! </bitter sarcasm>

> That's what should've been done
> with nuclear weapons.

No, that would have been a bad idea, IMHO. Most likely outcome: nuclear holocaust, as the US wasn't *that* much ahead of the USSR, and its supply of bombs was depleted after the barbecue in Japan. The Soviets, duly outraged, would have fought the US with the same vigor they displayed against the Nazis, and inflicted horrible damage. Their massive, battle-hardened armies could easily have held back Allied (or maybe not so allied) forces until they had their own nukes. Apart from this, the American (and European) public probably wouldn't have tolerated an attack on the Soviets, then still officially allies, directly after the end of WW2.

Apart from that, I wouldn't trust the US with absolute power. It may be *relatively* "good", but unchecked it would probably develop into a very nasty dictatorship. And of course, there would still be plenty of conflicts (Vietnam on a global scale), as the occupied world would be united in its hatred of the aggressors. Sooner or later, they'd start nuking local uprisings. There's your brave new world.

> If the "good guys" refuse to conquer the world
> each time a powerful new weapon is developed, sooner or later a bad guy
> is going to get to the crux point first.

"Power corrupts, and absolute power corrupts absolutely" and "the road to hell is paved with good intentions". What kind of dictatorship do you propose anyway? Destroying the world is relatively easy, but setting up a stable empire is a rather different ballgame.

> Alas, I don't think Zyvex's
> resources will suffice for the "matter programming" needed.

No, unless Zyvex used the nanotech to build an AI, they wouldn't stand much of a chance. And of course, if that were the case, it would be the AI, and not Zyvex, that would "conquer the world".

> > So what? Laws can be broken, twisted, evaded. Like we were waiting for
> > the government's blessing in the first place.
>
> Okay, now I don't get it. Are you under the impression you'll find it
> easier to evade nanotechnology laws than I'll find it to evade AI laws?

As I've pointed out before, I'm not counting on governments and their laws to stop dangerous research.

> > The writings are mine, obviously. Anyone who agrees with the principles
> > can call himself a "Transtopian". And yes, there are actually
> > like-minded people out there, strangely enough. Of course, as this is
> > the fringe of a fringe movement, you can't expect it to be very big.
>
> An... interesting... perspective

I see. So, have you found any millionaires to fund your project yet?