Re: One for the history books

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Aug 26 2001 - 17:49:55 MDT


Lee Corbin wrote:
>
> Eliezer writes
>
> > Getting a place in the history books isn't a good reason to do
> > something.
>
> Of course. But it would seem almost unbelievably difficult for
> anyone to get truly beyond that need. My own best theory as to
> why a number of famous people who admit that cryonics would work,
> e.g. Isaac Asimov, never bothered to get themselves frozen, is
> that they had begun to live for their fame. I am seriously
> doubtful of denials of the appeal of fame and fortune; to some
> degree they remind me of the calls to altruism made by failed
> economic systems. Perhaps it's only deeply unconsciously, but
> I don't think that there is anyone who can stand up and claim
> that *comparatively mundane* motives aren't also a factor in the
> working of his or her mind. We just have to learn to live with
> this, and, indeed, make the best use of it.

I have made the decision to go for one hundred point zero zero percent
altruism and complete rationality. The decision is not synonymous with
the achievement, but is a necessary precondition of that achievement. To
make a deliberate compromise is to halt your progress at that point,
because even the best you can achieve and the most ambitious form of
sanity you can imagine is only another step on a long road where only the
next single step is visible at any given time. There are a variety of
emotions that could act as sources of mental energy for my Singularity
work, the desire for fame among them. The excuse is there if I choose to
use it. I do not so choose. I choose to relinquish those emotions rather
than compromise rationality.

I made that choice at around age fifteen or sixteen, shortly after I
became aware of evolutionary psychology and the likelihood that the
emotions designed to underlie altruism would not be consistent with the
declared goals of altruism. Because emotions like the desire for fame are
fairly clear-cut - are "exceptional conditions" within the event-loop of
the mind - it is possible to learn to identify the emotion's subjective
feel, notice it, and disbelieve the mental imagery that causes it. I've
since moved on to more interesting areas of mental cleanup. As far as
things like the desire for fame go, I am finished. If you were to
identify the brain-level hardware support for fameseeking and insert a
little alarm that went off whenever the brainware activated, I think the
activity level would be essentially zero; if the alarm ever went off, it
would be a small, choked cough as the emotion was triggered, I noticed the
subjective feel, and the emotion was subsequently wiped out of existence.
(I am still validating my desire for routine social respect, which is
quite a different thing from a desire for general fame - although I have
recently started to believe that this emotion may be "considered harmful"
as well.)
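
To make the event-loop metaphor concrete, here is a minimal sketch in
Python. The names, the trigger condition, and the whole structure are
purely hypothetical illustrations of the metaphor, not a claim about how
the mind actually implements any of this:

class FameImpulse(Exception):
    """Hypothetical exceptional condition: a thought carrying fame-seeking imagery."""

def process(thought):
    # Hypothetical trigger: any thought that mentions the history books.
    if "history books" in thought:
        raise FameImpulse(thought)
    return thought  # ordinary thoughts pass through unchanged

def event_loop(thoughts):
    accepted = []
    for thought in thoughts:
        try:
            accepted.append(process(thought))
        except FameImpulse:
            # Notice the subjective feel, disbelieve the imagery, move on:
            # the impulse is caught here and never becomes a plan.
            continue
    return accepted

print(event_loop(["finish the design document",
                  "imagine the history books",
                  "review the code"]))
# -> ['finish the design document', 'review the code']

The point of the analogy is only that the emotion is a discrete, catchable
event rather than a pervasive bias, which is what makes it relatively easy
to notice and discard.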

In short, I'm now confident enough about my mental cleanup that I can talk
about it without making nervous little disclaimers such as "But, of
course, who really knows what's inside their mind?" As far as the
specific cognitive force "desire for fame" is concerned, I predict that my
plans will exhibit no more sign of it than plans made by an AI. I will
not make excuses in advance for failures because I am done cleaning up
that emotion and I do not expect there to be any failures.

I realize that this is a claim for such an extraordinarily high level of
ability that Bayesian reasoners, reasoning from the prior expected
population levels of self-overestimators and extremely sane people, may
find that such a claim (considered as an unadorned abstract) is more
reason to doubt sanity than to believe it. That's probably the reason why
a lot of people who are interested in the cognitive science of rationality
manage to sound self-deprecating when talking about it; not just as a
signal to others, I think, but as a signal to themselves, because they
*know* that high confidence in sanity is often a signal of insanity. But
that in itself is unsanity. It's saying, "I observe that I'm damned good
at being sane, but I won't admit it even to myself, because if I sent
myself the signal of confidence in sanity, I might have to interpret that
signal as evidence of insanity."
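
To spell out the Bayesian point with invented numbers - the base rates
below are purely illustrative placeholders, not estimates of anything:

# Illustrative Bayes calculation with made-up base rates.  The numbers are
# placeholders chosen only to show which way the inference runs; people who
# are neither extremely sane nor self-overestimators are assumed never to
# make the claim at all.

p_sane = 1e-4            # prior: fraction of people who are extremely sane
p_over = 0.05            # prior: fraction who overestimate their own sanity

p_claim_given_sane = 0.5   # an extremely sane person might state it plainly
p_claim_given_over = 0.5   # a self-overestimator is at least as likely to

p_claim = p_sane * p_claim_given_sane + p_over * p_claim_given_over

p_sane_given_claim = p_sane * p_claim_given_sane / p_claim
print(round(p_sane_given_claim, 4))   # ~0.002: the bare claim is weak evidence

With base rates like these, the unadorned claim taken by itself leaves the
listener almost certain they are dealing with a self-overestimator, which
is exactly why the claim sounds alarming in the abstract.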

And yes, for some years, I gave myself the runaround this way. It's an
emergent flaw in the way that the human mind processes pleasurable and
painful anticipations, and not a single emotion that can be identified and
dealt with. I'm also sad to admit that my giving myself the runaround was
eventually crushed under the buildup of experience, rather than noticed
via pure abstract logic. I only started working with the general problem
of pleasurable and painful anticipations while writing "Creating Friendly
AI", and am actually still working on cleaning that general class of
emergent problems out of my own mind.

> I never feel any the worse about someone when I come to have
> reason to think that either fame or fortune is a factor in their
> behavior. The only thing that annoys me is when they're so
> transparent about it that it comes across as petty and blind,
> and seems to bespeak an inability to see the situation from
> others' points of view.

Fame, as a motive to do good, is certainly preferable to the various
motives to do things that aren't good. So, as noxious emotions go,
fameseeking is fairly far down on my list of emotions to target in
others. The main reason for me to be concerned about observed fameseeking
is if I see it in someone who I think would like to be a rational
altruist, and who's already gotten fairly far in cleaning up the mental
landscape. Even so, fameseeking acts as an approximation to rational
altruism under some circumstances, and I am thus willing to use this
emotion as memetic shorthand - provided that the arguments are truthful,
the flaws in the approximation to altruistic rationality are either unused
or explicitly dealt with, and pure rationality is closer than under the
previous status quo. But there's no way that you could use the argument
from post-Singularity fame, because the underlying statement is probably
not truthful (do transhumans care?) and also because Singularitarians are
generally supposed to *be* conscious rationalists and the use of emotional
shorthand may lower the standard.

On the other hand, it's acceptable to say: "You are one of only six
billion entities, in a galaxy of four hundred billion stars and decillions
of sentient beings, whose lives predate the Singularity; you are one of
the oldest of all living things. I don't know whether you or anyone else
will respect that in the future, because it's difficult to predict what
transhumans will care about, but it does say something about how we should
feel *now*. We are not just the six billion people who were around before
the Singularity; we are the six billion people who created the
Singularity. An incomprehensibly huge future rests on the consequences of
our present-day actions, and 'the impact we have on the universe' is
essentially 'the impact we have on the Singularity'."

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


