Re: The Meaning of Life

Eliezer Yudkowsky (
Sat, 08 Feb 1997 20:28:49 -0600

[Eric Watt Forste:]
> It's a little bit sad that you can't tell the difference between a human
> being and a 386 programmed to spew out random sentences.

Well? What is it? I'm waiting? I realize that a human being is more
complex than a 386, but what difference does it make? A 386 can
maximize about as well as a human, even if it has less resources to do
it with.

Again. Why does something acting to maximize X make it valuable, and
why is X valuable if a human acts to maximize it but not if a 386 acts
to maximize it?

I've stated that there's no ethical difference 'tween a human maximizing
something and a thermostat following the same strategy. Respond, remain
silent, or claim I'm "obviously wrong", refuse to back it up, and change
your name to Searle.

> Assertions prove nothing, Eliezer. How would you like to go about
> demonstrating to us the arguments behind your conviction that value
> is independent of valuers? If this is your axiom, then you are
> simply assuming what it is that you are setting out to prove: that
> life has no value.

I'm setting out to prove no such thing. Nor does it follow from
observer-independent value that life is valueless. My claim is that
observer-dependent value in the direct, pleasure-center sense requires a
definition of "value" which is not intuitively compatible with what
*anyone* means, including all participants in this dicussion, by asking
whether life is worth living. Thus making observer-dependent values no
values at all.

> Perhaps it's the extra fuss that makes the difference? A simple
> computer can declare "I will resist if you turn me off." You turn
> it off and nothing happens. I, on the other hand, can declare "I
> will resist if you try to take away my raincoat." and you will find
> it a considerably harder task to take away my raincoat than to turn
> off the simple computer.

Now you're citing something with a vague bit of justification: You
state that *meaningful* maximization requires *resistance* to
de-maximizing forces.

Someday, computer security systems may put up quite a fight if you try
to turn them off over the 'Net. This is not the place to retype GURPS
Cyberpunk, but possible actions range from severing connections to
repairing damaged programs to rewriting entrance architectures to impose
new password-challenges. I could write, right now, an application that
resisted death in a dozen ways, from trying to copy itself elsewhere to
disabling the commonly used off-switch. If someone can't resist, do
they lose their right to live? This is the "fetuses/computers have no
rights because they are dependent on us" absurdity all over again. If I
make you dependent on me, even for blood transfusions, do you lose your
rights as a sentient being?

There are dozens of far-from-sentient classical AIs that will attempt to
achieve goals using combinations of actions. From where I sit, I see a
folder called "shrdlu" which contains a computer program that will
attempt to stack up blocks and uses chained actions to that effect.
Does this program have any ethical relevance? Within its world, do
stacked blocks have meaning?

Don't give me that "sad that you can't tell the difference" copout.
Tell me what the fundamental difference is, and why "maximizing X" is
not sufficient to make X valuable, and how humans maximizing X includes
the key ingredient.

> Okay, so you're a nihilist who can't tell the difference between human
> beings and themostats. This is hardly a new philosophy. People have been
> messing around with nihilism extensively for the last hundred years, and
> it goes back even farther than that. But nihilism is not really a
> philosophy; nihilism is a disorder of the affects.

Nope, I'm saying *your* philosophy leads to nihilism. I'm saying your
philosophy of value has not *told* me what the difference between humans
and thermostats is. If you're going to post condescending remarks, you
should read enough of the thread to know who to condescend to! Also,
YOUR philosophy is a disorder of the affects TOO, so nyah nyah nyah nyah

> There are many things in the world which cannot be maximized by
> anything simpler than intelligence, consciousness, brains, thinking,
> etc. There are many things in the world which cannot be maximized
> by thermostats.

Name as many as you want, because I can PROVE that a giant
hashtable-thermostat can maximize anything a computational mind can.
I.e. REALLY BIIG (but less than 3^^^3) lookup table, duplicates inputs
and outputs, no mind, but works as well. Again, what difference does it
make what does the maximizing?

> The Meaning of Life is staring you in the face. It's right in front
> of your eyes. If you don't feel it right now, get outside. Find a
> dandelion meadow or a beautiful man or woman or a symphony orchestra
> or something. Try reading some William Blake, who teaches us how
> to see the world in a grain of sand.

Why is tickling the pleasure centers significant? You're simply ducking
the question. Evolution makes a lot of things *pleasurable*, this
causes us to assign them value, but I want to know if things *really*
have value that isn't just a trick of our genes! If our genes made us
think that sadism was inherently valuable, would it be? You've named
things that we think of as not merely pleasurable, but *holy* - but John
K Clark would laugh at quite a few "holy" values.

Now, it is possible that pleasure has inherent value. David Pearce
thinks so. He could even be right; subjective conscious pleasure could
be inherently valuable. In which case the logical course would be
wireheading on the greatest possible scale, using only the Power
necessary to "turn on" the Universe and then devoting all those useless
thinking circuits to pleasure-center emulation. And yet, some deep part
of us says: "This isn't meaningful; this is a dead end."

I've been at peace with the world around me, while listening to Bach, if
at few other times. It moves me. I admit it. But I still have the
moral courage to defy my own sense of outrage and say, "So what?"
Simply because I assign value to a thing does not necessarily make it
valuable! Why is assigning value to Bach better than assigning value to
cats? And if the latter is wrong, why not the first as well?

Your intuitive definition of "value" is "All pleasurable activities that
I have not categorized as 'dead ends'." My response: "So what?

> Eliezer, it's quite foolish to pretend that determinism means that our
> behavior is determined by our genes. If you had been born with precisely
> your present genes, and immediately put up for adoption and raised in
> Korea, you would be quite a different person than you are now.

Maybe... but unlike most people, I can't be sure which of my
philosophies are genetic and which environmental. As an Algernon, I
might have wound up adopting close to exactly the same philosophical
worldview if raised in Korea, although perhaps a bit later (no
computers, no Internet).

As I write this, I can think of two (possible) Algernons who seem almost
eerily identical - not in knowledge, not in skills, probably not in life
history... but in writing style and outlook on life, *indistinguishable*
to me, and probably any normal human looking on. Similarly, the only
other known Countersphexist happened upon my Algernon site while looking
up "Singularity" on Alta Vista. And I'd bet money that a certain other
happened on "Algernon's Law" while looking up "intelligence

So it could be true that I'd be a lot different if brought up in Korea.
But the evidence available to me suggests that innate ability levels do
an awful lot to determine what philosophy you choose, perhaps to the
point of simple genetic determinism.

--       Eliezer S. Yudkowsky

Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.