Re: IA vs. AI was: longevity vs singularity

den Otter (neosapient@geocities.com)
Sun, 25 Jul 1999 01:08:03 +0200

> From: Eliezer S. Yudkowsky <sentience@pobox.com>
> > On Thu, 22 July 1999, "den Otter" wrote:
> >
> > > Eliezer seems to
> > > favor the AI approach (create a seed superintelligence and
> > > hope that it will be benevolent towards humans instead of
> > > using our atoms for something more useful), which is
> > > IMHO reckless to the point of being suicidal.
>
> Funny, those are almost exactly the words I used to describe trying to
> stop or slow down the Singularity.

That's hardly surprising; both options are extremely risky. Still, when it comes to destructive potential, the Singularity wins hands down, so we really shouldn't be so eager to cause one. It's like using a nuke against a riot (sort of).

> > > A much better, though still far from ideal, way would be
> > > to focus on human uploading, and when the technology
> > > is operational upload everyone involved in the project
> > > simultaneously.
>
> A suggestion bordering on the absurd. Uploading becomes possible at
> 2040 CRNS. It becomes available to the average person at 2060 CRNS.
> Transhuman AI becomes possible at 2020 CRNS. Nanotechnology becomes
> possible at 2015 CRNS.
>
> If you can stop all war in the world and succeed in completely
> eliminating drug use, then maybe I'll believe you when you assert that
> you can stop nanowar for 45 years, prevent me from writing an AI for 40,
> and stop dictators (or, for that matter, everyone on this list) from
> uploading themselves for 20. Synchronized Singularity simply isn't feasible.

-There's still a gap of 5 years between nanotech and AI, ample time to wipe out civilization if nanotech is as dangerous as you seem to assume.

-I'd have to stop nanowar, assuming I'd want to do that, for 25 years rather than 45 (nanotech at 2015 to the first uploads at 2040, by your own dates), because the Singularity comes immediately after the first wave of uploads. Anyone who's serious about survival, let alone ascension, must be in that first wave; needless to say, be sure to get rich. Consequently, I'd have to hold back AI for 20 years (2020 to 2040), not 40. Still very hard, but feasible with the proper support.

-Stopping you from writing an AI wouldn't be all that hard, if I really wanted to. ;-)

-If nanotech is advanced enough to destroy the world, it can surely also be used to move to space and live there long enough to transcend. You can run and/or hide from nanotech, even fight it successfully, but you can't do that with a superhuman AI; nanotech leaves some room for error, while AI leaves little or none. As I've said before, intelligence is the ultimate weapon, infinitely more dangerous than stupid nanites.

-Synchronized Singularity *is* feasible for a limited number of people (some of whom may choose to uplift the rest later). Yes, it's extremely hard, but not impossible. More importantly, it's the only real option for anyone who values his existence. The AI should only be used after all else has failed and the world is going to hell in a handbasket.

> after all, I've openly declared that
> my first allegiance is not to humanity.

No, it should be to yourself, of course. Anyway, so you're willing to kill everyone on Earth, including yourself, to achieve...what, exactly?