Re: Singularity-worship

Eliezer Yudkowsky (sentience@pobox.com)
Mon, 02 Dec 1996 19:32:44 -0600


> There is no heaven, there will be no Singularity. We can create a wonderful
> posthuman future where we will be unshackled from many human limits, but it
> will take critical thinking, creativity, and hard work.

Don't accuse *me* of lying idle. I put up the "Algernon's Law" page, "A
practical guide to intelligence enhancement using current technology."
You don't think that counts as "critical thinking, creativity, and hard
work"? I should think that a practical, proven method of IA counts as a
major landmark.

Maybe you guys think that the Singularity will solve all problems so we
don't have to do anything, but that says more about you than it says
about me. I believe that certain classes of problems are
"pre-Singularity" problems, including the nature of consciousness,
intelligence amplification, nanotechnology, artificial intelligence, and
so on. Some of these problems I have chosen to claim as my personal
responsibility, and as you can see, I have already cracked IA
(Algernon's Law) and am working on a second. Other problems are
"post-Singularity": Whether the Singularity is a good thing, whether
the Powers will be ethical, whether corpsicles can be revived, and so
on. These problems need solutions only insofar as they impact what we do
now. All we need to know for the problems above is "Almost certainly",
"of course", and "consider the alternative."

> >> Zero. With the tech it takes to revive a terminal frostbite victim,
> >> it's as easy to grow the body back as to repair it. Besides, this is
> >> all going to be post-Singularity and all costs at that point are
> >> effectively zero.

That wasn't Singularity-worship, it was an attempt to quash an
unnecessary thread. Like it or not, some bridges aren't going to be
crossed with the mental powers we have now. Reviving cryonics victims
is a job for the Powers (or even merely smarter humans), not for us.

Our job - or more specifically, my job, that of Eliezer "Algernon" Yudkowsky
- is to concentrate on short-term, practical things like prospects for
current IA. To be blunt, my good Lyle and More, what have *you* done
lately?

The only reason I'm on this list and putting up Web pages and generally
popularizing my work instead of doing it is that I'm recovering from a
dose of Prozac, which - as I discovered - switches off my creative
abilities like a bloody light. So I decided to make the best of it, and
so I have: I've put up a Singularity page and an Algernon page and am
now present on the net. When my abilities come back, as they seem to be
doing, I'll probably lurk instead of posting, if I elect to spend the
time wading through the mailing list at all.

What's your excuse?

> The Singularity concept has all the earmarks of an idea that can lead to
> cultishness, and passivity.

I experienced the incomprehensibility of the Beyond in a very personal
way, from both sides, both in writing some unpublished work and in
reading Gödel, Escher, Bach. I know damn well I ain't ever gonna match
Hofstadter and that it's futile for me even to try. I'm still
discovering little (and not-so-little) gems I overlooked, years later.
I can appreciate his work, but I have trouble even noticing it because
key cognitive modules are crippled. There are also some things that
shouldn't be done by an ordinary human unless absolutely necessary.
Designing software architectures comes to mind - if the Java VM had had
Code and Method/Field primitive types, and stacks for each type, it
wouldn't need a bytecode verifier, would run faster, and we could write
self-modifying Java code and pass procedure references. Seems obvious
to me...
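
(To make "pass procedure references" concrete: here is a toy sketch of
the workaround the language forces on you today, not the redesigned VM.
The names Procedure, Square, and applyTwice are invented for
illustration - you smuggle the procedure in behind a one-method
interface:)

// The usual way to hand a procedure to another method in Java as it
// stands: wrap it in an object implementing a one-method interface.
interface Procedure {
    int apply(int x);
}

class Square implements Procedure {
    public int apply(int x) { return x * x; }
}

public class PassProcedure {
    // Accepts the "procedure reference" as a Procedure object.
    static int applyTwice(Procedure p, int x) {
        return p.apply(p.apply(x));
    }

    public static void main(String[] args) {
        System.out.println(applyTwice(new Square(), 2)); // prints 16
    }
}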

So the point is:
1) I know - from Hofstadter - that the work of a mere mortal who
happens to be smarter than you are is almost impossible to comprehend.
2) I know - from writing computer architectures - that being smarter in
certain areas causes progress there to proceed at a vastly higher pace.
3) I know - from writing "Algernon's Law" - that this applies as well
to the field of intelligence enhancement.

It follows that there will be a Singularity. I believe that the
Godling's Glossary defines transhuman as "A human staring into a
magnifying glass." This doesn't give a very good flavor of the future,
particularly if you're not used to dealing with people smarter than you
are (such as full humans) and dumber than you are (such as ordinary
mortals). As someone with actual experience of the phenomena involved
with intelligence enhancement - on both sides, as an Algernon - I am
telling you that your conceptions of a merely transhuman future, derived
from staring into a magnifying glass, are far short of the fact. I know
in excruciating detail the sheer blank impossibility of imagining
someone smarter than you are and it is this - not any greater
intelligence - that lends to my magnifying glass a greater
verisimilitude, for what I magnify is not my own abilities but the
experienced comparison between H and >H, or between <H and H.

It is from that, and not from the tradition of the Apocalypse, that I
derive my views on the Singularity. I wouldn't be pulling rank on you
except that 1) I expect to be retiring from this list shortly, and won't
have to face the music, and 2) I am sick and tired of supposed Extropians
and sf fans with no appreciation for the Beyond and the alien and the
incomprehensible.

> >OK, it is time for my bi-monthly reaction: I HATE THIS SILLY MEME!

I hate any meme that applies to events later than 2010 and am attempting
to point out the sheer futility - silliness, if you like - of discussing
it.

Just learn to live with it, guys. There's no use in discussing
anything, anything at all, that takes place in a world where there are
transhumans. They'll solve it in seconds. I KNOW. I was on Prozac
when I wrote a program that would simulate Life on my computer. It took
me days and kept crashing. Around a week ago, I pulled it out again and
it took me maybe fifteen minutes to take out all the bugs, simplify the
program so the rules could be modified in seconds, and double the
speed. Now imagine that magnified a few hundred times and you'll
understand how futile it is to be concerned about the problems of
transhumanity.
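
(For the curious, the "rules modifiable in seconds" part is nothing
deep - just table-driven rules. A toy sketch, not the actual program;
the names Life, birth, and survive are made up, and the tables as
written give Conway's standard B3/S23 rules:)

// One Life generation, with the birth/survival rules kept in tables
// so a different cellular automaton is one edit away.
public class Life {
    // Index = number of live neighbors (0-8). B3/S23 = Conway's Life.
    static boolean[] birth =
        {false, false, false, true, false, false, false, false, false};
    static boolean[] survive =
        {false, false, true, true, false, false, false, false, false};

    // Compute the next generation; cells beyond the edge count as dead.
    static boolean[][] step(boolean[][] g) {
        int rows = g.length, cols = g[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int n = 0; // live neighbors
                for (int dr = -1; dr <= 1; dr++) {
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int rr = r + dr, cc = c + dc;
                        if (rr >= 0 && rr < rows && cc >= 0 && cc < cols
                                && g[rr][cc]) n++;
                    }
                }
                next[r][c] = g[r][c] ? survive[n] : birth[n];
            }
        }
        return next;
    }

    public static void main(String[] args) {
        // A blinker: three cells in a row flip between horizontal
        // and vertical each generation.
        boolean[][] grid = new boolean[5][5];
        grid[2][1] = grid[2][2] = grid[2][3] = true;
        grid = step(grid);
        for (int r = 0; r < 5; r++) {
            StringBuffer line = new StringBuffer();
            for (int c = 0; c < 5; c++) line.append(grid[r][c] ? '#' : '.');
            System.out.println(line);
        }
    }
}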

The things to concentrate on are practical intelligence enhancements,
nanotechnology, faster computers, and artificial intelligence. It's in
that order because I frankly don't think an unenhanced human is going to
be able to design anything as complex as an AI architecture; faster
computers don't do any good if we can't program them; nanotechnology
will turn the Earth into gray goo; and IA via Algernons, if we do it
right, will have a two-year turnaround time and give us the ability to
handle all the other stuff.

Okay, I've faced the Singularity backlash. I can understand why it
looks like religious ecstasy to the rest of you. You've never been
faced with anything smarter than you were, so you cannot possibly
appreciate what it is like. Certainly the ravings against
"impracticality" demonstrate that many managed to miss the most
important aspect of the Singularity, that we don't need to think about
what happens afterward, just how to get there. The business about
sitting back and waiting also seems to demonstrate a communications
breakdown. I'm saying that we should make transhumans ASAP and they'll
handle everything else - how do we get from there to inactivity?

Enough talk. As far as I can tell, my abilities have more or less
returned, so I need to work on one of the other prerequisites to the
Singularity. This message seems to be a fairly appropriate signoff. I
have work to do and I've spent enough time composing this message.

I suppose I've gained something from discussing things with this
list - the directions to pages on neural plasticity and the list of
advertising rules seem to be the two big ones - but it would have been a
lot more fun if you guys had been discussing the practical cognitive
pitfalls of IA, or making technical suggestions for work in AI, instead
of cryo-revival costs.

I shall continue to lurk, at least for a while, and may even post a
couple of replies - particularly to discussion of this message - since
it isn't fair to grab the last word without giving your opponents a
chance to fire back.

In conclusion, remember that I might not be right, but if I say you're
wrong, you're almost certainly wrong. I happen to know for a FACT that
every conception of transhumans I've run across has shared the causal
blind spot that all full humans are born with. Every time I read
Vinge's conception of smarter-than-human I feel like I'm staring right
through the page directly into Vinge's mind; the same holds true for
Flowers's Charlie or B5's Shadows and Vorlons. If our technology holds
out, there's going to be a Singularity. Every argument you can muster
against it is derived from your merely human mind; every quibble is
extrapolation from the simply mortal.

I've looked into the Beyond; I look at it in a magnifying glass; I see
Singularity. I may be as wrong as you are. But the Singularity will be
*at least* what I see, just as it will be *at least* as powerful as your
conception of it. I may be as wrong as you are - but if so, I was too
conservative.

Eliezer S. Yudkowsky.

-- 
         sentience@pobox.com      Eliezer S. Yudkowsky
          http://tezcat.com/~eliezer/singularity.html
           http://tezcat.com/~eliezer/algernon.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I know.