Re: Singularity: Individual, Borg, Death?

Eliezer S. Yudkowsky
Wed, 02 Dec 1998 12:38:30 -0600

Paul Hughes wrote:
> "Eliezer S. Yudkowsky" wrote:
> For starters, if I had to come down to choosing logic vs. survival, I'd choose my
> survival every time. I can think of at least three life-threatening situations
> where my gut instinct saved my life over all other logical objections.

One of the primary lessons I learned the first time I took the SAT was to trust my instincts and not second-guess myself. One should not confuse logic and intelligence. Perhaps I have. Certainly my argument was "merely" rational for quite some time before I successfully reduced it to pure logic. I try not to suffer from the "delusion of form". I don't insist that the real world is logical. My logic is sufficiently honed that it incorporates my informational intuitions; there is no conflict involved, just a system functioning cohesively to produce correct answers. I don't allow myself to be directed by wishful logic. Even my root supergoal, the Singularity, is produced by a tight logic chain of minimal length, and I defy anyone to produce a shorter chain of logic without assuming what they're trying to prove.

Why do I need a root supergoal produced by pure logic? I don't trust the built-in human goal system. Yeah, we can all think of one or two examples where it worked the way it was supposed to. Wheee. The rest of the day, however, we generally spend fighting someone else's emotions when we aren't fighting our own. This is a legacy system from the days of hunter-gatherer tribes. Furthermore, it doesn't serve our goals. It serves evolution's goals. You may think of yourself as being very cagey and selfish by making survival your first priority, but you're just embracing a set of genetic puppet strings. The whole thing is arbitrary.

I happen to think that our cognitively produced goals have objectively real analogues, in the same way that our visual field usually corresponds to physical objects. Choosing the wrong goal is like having a hallucination, or perhaps like believing that the sun and moon are the same size. There are real answers, and I want the real answers, not evolution's trick questions. And if there aren't real answers, then mine are as good as anyone's.

> Ok, so you're acting on logic. If I understand you correctly, all of the things that
> make life enjoyable (fun) are irrelevant because logically our only outcomes are
> singularity or oblivion?

No, fun is not irrelevant in the sense of having no causal influence on the world. It is, however, irrelevant to _my_ logic chain because "fun" is an arbitrary set of evolutionary instructions. It's irrelevant to _your_ logic chain because, in the storm ahead, you are fighting for survival, not for fun. It's probably irrelevant to the _correct_ logic chain, but I have no way of being sure.

> Either we face the inevitable or accept extinction?

No, I expect that most people will go on either in complete ignorance of the Singularity, or secure in the delusion that the future is endless nanotechnological fun. Then some AI project will succeed, and that'll be it.

> > Now, I happen to think that even from humanity's perspective, a rapid
> > Singularity is the best way to go, because I don't see a feasible alternative.
> Can you concisely explain why a non-rapid path to the Singularity is unfeasible?

As technological levels increase, the resources required for a Singularity or for destroying the world go down. A single person equipped with nanotechnology can do either. There are excellent motives for both the good guys and the bad guys to cheat. Even concentrating all resources into the hands of a group (via preemptive nanotechnology) - a group small enough to contain no cheaters - wouldn't work, because developing the tools would require enough researchers that at least one of them would cheat.

Delaying the rate of technological increase does not decrease the absolute number of technological risks. Given that we already have the capability to blow up the world, delay simply increases the probability of that outcome; it also uses up resources, thus increasing global tensions.

> I'll agree that if a non-rapid path to a friendly singularity is not possible, then
> the logical thing for anyone who cares about their continued existence is to embrace
> a rapid Singularity.

This is in fact the case, as best I understand it.

> Can you accept the possibility that this higher intelligence could be internally
> generated rather than externally dictated? Assuming it's possible, which would you
> rather have?

Doesn't make a difference to me. What matters is that the intelligence exist. After that, I've handed over the responsibility.

> My allegiance is to myself first, that part of humanity I care about the most, then
> the Singularity. Like you, Eliezer, I'm seduced by what the Singularity portends and
> want to achieve and immerse myself in higher intelligence, but only as a
> transformation of my ego rather than a deletion of it.

I really don't think that any action we can take on this level can dictate the internal organization post-Singularity. Likewise, my intelligence is insufficient to deduce the result. All that remains is to open up the box and find out whether we live or die, rather than refusing to look.

--         Eliezer S. Yudkowsky

Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.