Re: Otter vs. Yudkowsky

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Mar 26 2000 - 18:54:34 MST


"D.den Otter" wrote:
>
> If nano is relatively easy, if it quickly becomes dangerous and
> uncontrollable and if strong AI is much easier than human
> upgrading then your approach does indeed make sense, as a last
> desperate effort of a dying race.

Well, right to begin with, I think this *is* the situation. My
*particular* scenario is not beyond reasonable doubt, but the
impossibility of the perfect-uploading-on-the-first-try scenario *is*.
The *only* way you have a chance of segueing directly from nanotech to
uploading is if you can obtain and hack the nanomeds to attempt BCI
(Brain-Computer Interface) upgrading, followed by uploading of mild
transhumans with enough power to figure out how to upload themselves.
This itself is improbable; neurocommunicative nanotech is considerably
more advanced than that needed for nanowar.

The thing to remember, though, is that upgrading is an inherently
runaway process, and therefore inherently unsynchronized. What actually
happens is that the first transhuman who gets that far immediately sees
the crushing necessity of using vis nanocomputer to write vis own Sysop,
so as not to run the risk of anyone else writing a Sysop first. Even if
your transhumans stay as themselves all the way, the first one to reach
the point of rewriting vis own source code winds up as the One Power.
Even if you had absolutely no competitors, if it's possible to become a
Power and maintain any shred of yourself, then it was possible to write
a Sysop all along. If you do not maintain any shred of your thoughts,
emotions, and initial motivations, then it is *not you* and you are
*dead*, as clearly as if you'd put a bullet through your head.

> However, if the situation
> turns out to be less urgent, then (gradual) uploading is definitely
> the way to go from a "selfish" perspective because, instead
> of being forced to passively await a very uncertain future, one
> can significantly improve one's odds by stepping up one's efforts.
> Initiative & creativity are rewarded, as they should be. You can
> outsmart (or understand and make deals with, for that matter)
> other uploads, but you can't do that with a transhuman AI if
> you're merely human or an upload running on its system. It's
> not just an instinctive, irrational illusion of control, you
> really *have* more control.

I get the feeling that maintaining control is a Ruling Argument for you.
One of my Ruling Arguments, for example, is the thought that we're all
going to wind up looking like complete idiots for attempting to control
*any* of this; that we do not understand what's Really Going On any more
than Neanderthals sitting around the campfire, and that all our punches
will connect only with air. And moreover, this is a *happy* ending.
One of the thoughts I find most cheering, when it comes to imagining
what happens on that Last Day when I finally do get uploaded, is that I
will *not* be spending the next billion years on an endless round of
simulated physical pleasures. I don't *know* what will happen next. It
really is a step into the true unknown. And that's what keeps it exciting.

Nonetheless, with so much at stake, I can override my own worries about
winding up looking like an idiot, and think about Sysop instructions
*anyway*, just in case we *do* understand what's going on. The prospect
of failing to properly appreciate the mystery, no matter how much I hate
it, does not change the fact that I cannot reasonably put the chance
that we *are* right at less than 10%. That possibility must
therefore be prepared for, even though there is a 90% probability that I
will wind up looking like an idiot.

When cold, quantitative logic gives unambiguous recommendations, I can
override a Ruling Argument. Your Ruling Argument about loss of control
is not in accordance with the simple, quantitative logic dictating the
best probability of survival. You yourself have laid down the rules for
what is, to you, the rational action. It is the action that maximizes
your probability of survival with pleasurable experiences. Maintaining
control is a *subgoal* of that necessity. It is illogical to optimize
all actions for maintaining control if that involves *defiance* of the supergoal.

Subgoals are not physical laws. They are heuristics. They have
exceptions. In exceptional cases, it is necessary to calculate the
supergoal directly. To do otherwise is to abandon intelligence in favor
of blind inertia.
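
To pin down what I mean by calculating the supergoal directly, here's a
toy sketch in Python; the action names and numbers are placeholders,
loosely echoing figures from elsewhere in this exchange, not a real
decision procedure:

    def p_survival(action):
        # Toy numbers only: ~0.1% for the solo-upload race, the midpoint
        # of the 10%-70% range for the Sysop path.  Not actual estimates.
        return {
            "keep_personal_control": 0.001,
            "build_sysop_first":     0.40,
        }[action]

    def choose(actions):
        # Supergoal: maximize probability of survival.  The "control"
        # subgoal gets no terminal weight; it only ever mattered as a means.
        return max(actions, key=p_survival)

    print(choose(["keep_personal_control", "build_sysop_first"]))
    # -> build_sysop_first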

> > Or in other words, I'm sure den Otter would agree that 70% is a
> > reasonable upper bound on our chance of success given our current
> > knowledge (although I'm sure he thinks it's too optimistic).
>
> Well yes, ok, I do think it's somewhat optimistic, but, strictly
> speaking, the 10-70% range is reasonable enough. Needless
> to say, when you look at the average we're dealing with some
> pretty bad odds here. I mean, if those were your odds to
> survive a particular operation, you'd probably want your cryonics
> organization to do a [rescue team] standby at the hospital.
> Unfortunately, in the Singularity Game there are no backups
> and no second chances.

I quite agree. So what? That 10%-70% is the best available chance that
my actions can affect. That's all I care about.

> > [...] I do expect that all
> > reasoning applicable to basic goals will have been identified and
> > produced within a fairly small amount of time, with any remaining
> > revision taking place within the sixth decimal place.
>
> Well, I hope you're right. The biggest risk is probably, as you
> already suggested, at the very beginning; that's when human error
> (i.e. bad programming/bad hardware/some freak mishap)
> could thoroughly mess up the AI's mental structure, making
> it utterly unpredictable and potentially very dangerous. Or
> just useless, of course. This possibility should by no means
> be underestimated.

I agree. Do not underestimate the possibility of bad programming, bad
hardware, and some freak mishap combining to mess up a *human* in the
process of *uploading* to new hardware and then *upgrading* to new
levels of intelligence. Leaving aside your Ruling Argument, the most
rational course of action might be to write a Sysop and have *it*
upgrade *you*, solely on the grounds that undertaking the journey
yourself gives you a less than 10%-70% chance.

I've never been an upload, of course, but being a neurohack, writing
"Coding a Transhuman AI", and trying to figure out how to write a goal
system has given me some insight as to what happens when an evolved mind
starts to acquire fine-grained self-awareness. You (den Otter) are, of
course, allowed to tell me to shut up, but I think that if you went
through simply what *I've* been through as the result of being brilliant
and introspecting with the aid of cognitive science, your personality
would wind up looking like something that went through a shredder. If
it happened gradually, you'd build up new stuff to replace the old. If
it happened fast - if more than one thing happened simultaneously -
there'd be a serious risk of insanity or an unknown circular-logic
effect embedding itself. In either case, I really don't know what
your motivations would be like afterwards.

A slow transition, if events occurred in the right order, would have a
fairly high chance of giving rise to another Yudkowsky *when* you were
at *this point* on the curve - I don't know what happens afterwards. If
you had the ability to observe and manipulate your emotions completely
before you started work on upgrading intelligence, as seems likely, then
I really don't know what would happen. I can predict the human
emotional sequiturs of increasing intelligence up to my own level, but
not if the intelligence itself is choosing them.

> [Sysop]
> > Don't think of it as an enemy; think of it as an Operating System.
>
> Operating Systems can be your enemy too; as a Win95 user
> I know what I'm talking about...

You're personifying again. Think of a Sysop as an archetypal OS, one
that does what you tell it to, invisibly and transparently unless you
ask it to behave otherwise.

> Yes, but what if the other sentients *aren't* part of the
> simulation (alien life forms from distant galaxies, uploads
> or AIs that have ascended more or less simultaneously but
> independently). By "surroundings" I meant the "real world"
> outside the SI technosphere, not the (semi-autonomous)
> simulations it is running. I agree that containing the latter
> shouldn't be too much of a problem, but that's not the
> issue here.

Why is it safer to be on the front lines yourself than it is to let the
Sysop do it, aside from the emotional anxiety-binding you're using as a
Ruling Argument? Two Powers in the same situation are presumably going
to come up with the same plans of attack or defense; your actual chance
of survival is completely unaltered.

> > I should correct my terminology; I should say that observer-*biased*
> > goals are simply evolutionary artifacts. Even if only
> > observer-dependent goals are possible, this doesn't rule out the
> > possibility of creating a Sysop with observer-unbiased goals.
>
> That's a lot better. I still think that a totally "selfless"
> being is a lower form of life no matter how smart it
> is otherwise (a giant leap backwards on the ladder of
> evolution, really), but perhaps this is just an "irrational"
> matter of personal taste. In any case, the prudent
> approach is to keep your ego until you're certain, beyond
> all reasonable doubt, that it is not an asset but a handicap
> (or useless appendage). In other words, you'll have to
> become a SI first.

Unless you don't start out with an ego. And I should note, by the way,
that the emotional bindings backing up that "beyond all reasonable
doubt" might be one of the first things to dissolve if you started
upgrading; instead of "anxiety about making a mistake" being a Ruling
Argument, you might just quietly multiply probabilities together. One
of the universals of my own journey has been that the more you practice
self-alteration, the less anxious you become about altering other things.

> Is it your aim to "encapsulate" the whole known universe with
> everything in it? Regardless of your answer, if there *are* other
> sentients in this universe, you will eventually have to deal
> with them. Either you come to them -swallowing galaxies
> as you go along- or they come to you. Then you'll have to
> deal with them, not in Java-land, but in reality-as-we-know-it
> where the harsh rules of evolution (may) still apply.

Okay, but if it's possible for a Solar-originating Power to either win
or manage to hold off the enemy, we're safe inside the Sysop; it is
nonthinkable for the behaviors necessary on the *outside* to alter the
behaviors executed on the *inside*. A Sysop is not an evolved mind. It
does not have *habits*.

> No, it wouldn't need the pleasure/pain mechanism for
> self-modification, but that doesn't mean that the pure
> emotion "pleasure" will become redundant; if your machine
> seeks the "meaning of life", it might very well re-discover
> this particular emotion. If it's inherently logical, it *will*
> eventually happen, or so you say.

I think of objective morality as being more analogous to a discoverable
physical substance, although admittedly the formalization is as an
inevitable chain of logic. Anyway, this truly strikes me as being
spectacularly unlikely. Pleasure has no logical value; it's simply a
highly inefficient and error-prone way to implement a system. It is
nonthinkable for a better architecture to deliberately cause itself to
transition to a pleasure-based architecture; the pleasure-based
architecture might approve, but it isn't *there* yet; the *previous*
system has to approve, and there is no rational, supergoal-serving
reason for it to do so.

> > In particular, your logic implies that the *real* supergoal is
> > get-success-feedback,
>
> Forget success-feedback, we're talking about an "untangled"
> emotion for its own sake.

I don't get it. What is pleasure, if not success feedback? I know what
pleasure does in the human brain. It's not magic. An ever-mounting
number stored inside a LISP atom labeled "pleasure" is no more
"pleasure" than the system clock unless it has side effects. What are
the side effects? And which parts define "pleasure"? Why would a
protective, supergoal-based Sysop decide that it served those goals to
recenter itself around the sole goal of increasing some number labeled
"pleasure"? Heck, why would you?

> The supergoal would be "to have
> fun", and the best way to do this is to have a separate
> module for this, and let "lower" autonomous systems sort
> out the rest. The power would be happy & gay all the time,
> no matter what happened, without being cognitively impaired
> as ecstatic humans tend to be.

Why doesn't the Sysop worry that the autonomous system would take over,
crush the Sysop, and institute its own pleasure-having module? Why
can't you view the Sysop as the autonomous system that is a part of
yourself? Why couldn't the Sysop view itself as a part of you, to the
extent that it was necessary to "view itself" as anything at all? All
your instincts that have anything to do with dividing the Universe into
"agents" are, in a fundamental sense, arbitrary.

> If you don't allow an entity to experience the emotion
> "pleasure", you may have robbed it of something "inherently"
> good. Not because emotions are needed for functioning
> and expanding, but because pure, un-attached, freely
> controllable pleasure is a *bonus*. You have 3 basic
> levels: -1 is "suffering", 0 is the absence of emotions,
> good or bad (death is one such state) and 1 is
> "pleasure". Why would a fully freethinking SI want
> to remain on level 0 when it can move, without sacrificing
> anything, to level 1, which is "good" by definition
> (I think I'm happy, therefore I *am* happy or something
> like that).

It is *not* good by the only definition that matters - the Sysop's.
System A does not transition to System B because System B thinks it's a
good idea, but because System A thinks it's a good idea.

> Think of it as a SI doing crack, but without all
> the nasty side-effects. Would there be any logical reason
> _not_ to "do drugs", i.e. bliss out, if it didn't impair your
> overall functioning in any way?

But it would. If blissing out is anything for *us* to worry about, then
it's something for a protective Sysop to worry about. In any case, what
exactly *is* pleasure, on the source-code level, and what distinguishes
it from, say, modeling the three-body problem? Why is a Power
intrinsically more likely to devote more space to one than to the other?

> Bottom line: it doesn't
> matter whether you include pleasure in the original
> design; if your machine seeks the ultimate good and
> has the ability to modify itself accordingly, it may
> introduce a pleasure module at some point because
> it concludes that nothing else makes (more) sense.
> I think it's very likely, you may think the opposite, but
> that the possibility exists is beyond any reasonable
> doubt.

If the goal system is improperly designed and fragile, there is the
possibility of that happening. There is also the possibility that no
matter *how* I design a goal system, human or otherwise, it will come to
view itself on a level where the indicator is the supergoal, then
short-circuit. The first possibility is Something I Control, and while
I don't know that I could sit down and write a non-fragile goal system,
I do think that once we can write the prehuman architecture we'll
understand what to put in it. From an engineering standpoint, I am not
intrinsically more worried about nonfragile goals than about getting the
seed AI and the subsequent Power to do nonfragile physics or nonfragile mathematics.

In the latter event, we're talking about the possibility that all Powers
do something that I would regard as sterile and essentially insane. If
so, then we're all doomed to wind up dead or sterile, and there's very
little I can do about it.

> Yes, IMO everything is arbitrary in the end, but not everything
> is equally arbitrary in the context of our current situation. If
> you strip everything away you're left with a pleasure-pain
> mechanism, the carrot and the stick.

Au contraire. Carrots and sticks are one of the first things that get
stripped away.

If you strip *everything* away, you're left with the system's behavior,
i.e. the list of choices that it makes. On a higher level, you can
hopefully view those choices as tending to maximize or minimize the
presence or absence of various labels which describe the physical
Universe; these labels are the behavioral supergoals. If the choices
are made by a process which is internally consistent in certain ways,
and the process declaratively represents those labels, then those labels
are cognitive supergoals. To some extent regarding a physical system as
a cognitive process is an observer-dependent decision, but it makes
sense as an engineering assumption.

From a crystalline standpoint, all you need for that basic assumption is
a representation of external reality, a set of supergoals (as a function
acting on representations), a set of available choices, a model of those
choices' effects within the representation, and a piece of code which
actually makes choices by comparing the degree of supergoal fulfillment
in the represented outcomes.

The human system, if it were a lot more crystalline than it is, would be
regarded as implementing the code via an indirection, which we call a
pleasure-and-pain system. Pleasure and pain are determined by supergoal
fulfillment, and choices are made on the basis of pleasure and pain;
therefore, the possibility exists for the system to short-circuit by
changing the conditions for supergoal fulfillment. But pleasure and
pain are not an intrinsic part of the system.

Because a pleasure-and-pain system can do anything a direct-supergoal
system can do, it will always be possible to anthropomorphize a
direct-supergoal system as "doing what makes it happy", but the code for
a pleasure-and-pain system will be physically absent, along with the
chance of short-circuiting.
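
Continuing the same toy sketch, under the same caveats: the human-style
variant routes the identical comparison through an intermediate
"pleasure" signal. It can do everything the direct version can, but the
extra layer is exactly where the short-circuit can happen.

    def choose_via_pleasure(world_model, supergoal, choices, predict):
        def pleasure(state):
            # Pleasure determined by supergoal fulfillment...
            return supergoal(state)
        # ...and choices made on the basis of pleasure rather than on the
        # supergoal directly.  A system that can rewrite pleasure() can
        # satisfy itself without satisfying the supergoal; in the direct
        # version above there is simply nothing there to rewrite.
        return max(choices, key=lambda a: pleasure(predict(world_model, a)))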

> From our current pov,
> it makes sense to enhance the carrot and to get rid of the
> stick altogether (while untangling the reward system from
> our survival mechanism and making the latter more or
> less "autonomous", for obvious reasons). AIs could still
> be useful as semi-independent brain modules that take
> care of the system while the "I" is doing its bliss routine.
> "Mindless" Sysops that still have to answer *fully* to
> the superintelligent "I", whose line of consciousness can
> be traced back directly to one or several human uploads.

And why aren't these "Mindless" Sysops subject to all the what-iffery
you unloaded on my living peace treaties? If you can safely build a
Sysop which is fully responsible to you, you can build a Sysop which is
a safe protector of humanity. Who the Sysop is responsible to is not a
basic architectural variable and does not affect whether or not it works.

All you're really pointing out is that, if you're a Power, you might be
able to compartmentalize your mind. Doesn't really affect the choices
from our perspective, except to point up the basic subjectivity of what
constitutes a "compartment".

> > Well, you see "objective morality" as a romantic, floating label. I see
> > it as a finite and specifiable problem which, given true knowledge of
> > the ultimate laws of physics, can be immediately labeled as either
> > "existent" or "nonexistent" within the permissible system space.
>
> You're still driven by "arbitrary" emotions, attaching value to
> random items (well, not completely random in an evolutionary
> context; seeking "perfection" can be good for survival). At the
> very least you should recognize that your desire to get rid of "all
> human suffering" is just an emotional, "evolved" monkey
> hangup (the whole altruism thing, of which this is clearly
> a part, is just another survival strategy. Nothing more, nothing
> less). But that's ok, we're all still merely human after all.

It is not a survival strategy. It is an adaptation. Humans are not
fitness-maximizers. We are adaptation-executors. All I care about is
whether those adaptations serve my current purpose. I don't care what
their "purpose" was in the ancestral environment, except insofar as
knowledge of evolutionary psychology helps me maintain control of my own
mind. I do not care if a pocket calculator grew on a tree and evolved
arithmetical ability in response to selection pressures, as long as it
gives the right answers.

Yes, part of my altruism-for-fellow-humans is evolved, although there's
a genuine "what if pleasure qualia are objectively moral" train of
thought in there as well. In the event that there's no objective purpose
to the Universe, altruism is what I'd use for a goal.

Any objective purpose would take precedence over the altruism, because
the objective purpose is real and the altruism is only a shadow of
evolution (unless the objective purpose verifies the altruism, of
course). The same would hold for you, whenever you became smart enough
for the pressure of logic to become irresistible.

> You said yourself that your Sysop would have power-like
> abilities, and could reprogram itself completely if so desired.
> I mean, an entity that can't even free itself from its original
> programming can hardly be called a Power, can it? Perhaps
> you could more or less guarantee the reliability (as a
> defender of mankind) of a "dumb" Sysop (probably too
> difficult to complete in time, if at all possible), but not
> that of truly intelligent system which just happens to
> have been given some initial goals. Even humans can
> change their supergoals "just like that", let alone SIs.

But why would the Sysop do so? There has to be a reason. You wouldn't
hesitate to designate as worthlessly "improbable" the scenario in which
the Sysop reprograms itself as a pepperoni pizza.

> And how would the search for ultimate truth fit into
> this picture, anyway? Does "the truth" have a lower,
> equal or higher priority than protecting humans?

The truth is what the Sysop *is*. A Mind is a reflection of the truth
contained in a cognitive representation. If that truth dictates
actions, it dictates actions. If that truth does not dictate actions,
then the actions are dictated by a separate system of supergoals. If
supergoals can be "true" or "false", and they are false, then they
disappear from the mirror.

> Unless it encounters outside competition. It may not have to
> compete for mates, but resources and safety issues can be
> expected to remain significant even for Powers.

Evolution requires population. From our perspective, all that's
significant is the Power we start out in, which either lives or dies.

> Also, even
> if we assume no outside competition ("empty skies"), it
> still could make a *really* bad judgement call while upgrading
> itself.

So could you. So could I. Of the three, I'll trust the seed AI. The
seed AI is designed to tolerate change. We are not. I would regard any
other judgement call as suspect, probably biased by that evolved-mind
observer-biased tendency to trust oneself, a function of selection
pressures in ancestral politics and ignorance of brain-damage case histories.

Do you know that damage to the right side of the brain can result in
anosognosia, in which left-side appendage(s) (an arm, a leg, both) are
paralyzed, but the patient doesn't know it? They'll rationalize away
their inability to move it. Try to tell them otherwise, and they'll
become angry. It could happen to *you*. Do you really trust yourself
to survive uploading and upgrading without running haywire?

> > Even so, your chances are still only one in a thousand, tops - 0.1%, as
> > I said before.
>
> Again, only in the worst-case "highlander" scenario and only
> if the upload group would really be that big (it would probably
> be a lot smaller).

The smaller the group, the less likely you are to obtain the necessary
nanotechnology and take a shot at it. I regard the Highlander Scenario
as the default, since it only requires *one* attack-beats-defense or
runaway-feedback-desynchronizes situation at *any* point along the
self-enhancement curve, and my current engineering knowledge leads me to
think that *both* are likely to hold at *all* points along the curve.
The uploading scenario is far, far, far more tenuous than the Sysop
scenario. The number of non-default assumptions necessary for the
project to be executed, for the utility to be significant, and for there
to be more than one survivor places it into the category of non-useful probabilities.
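
A quick back-of-the-envelope for why one failure anywhere on the curve
is enough; the per-step probabilities below are invented for
illustration:

    def p_synchronization_holds(p_desync_per_step, steps):
        # Synchronization survives only if *no* step along the
        # self-enhancement curve desynchronizes or lets attack beat defense.
        return (1.0 - p_desync_per_step) ** steps

    for p in (0.05, 0.20, 0.50):
        print(p, [round(p_synchronization_holds(p, n), 3) for n in (5, 20, 50)])
    # Even at 5% per step, fifty steps leaves roughly an 8% chance that
    # the multi-upload scenario is still synchronized at the end.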

> > Not at all; my point is that AI is a gamble with a {10%..70%} chance of
> > getting 10^47 particles to compute with, while uploading is a gamble
> > with a {0.0000001%..0.1%} of getting 10^56. If you count in the rest of
> > the galaxy, 10^58 particles vs. 10^67.
>
> Think bigger.

Okay. Knuthillions vs. knuthillions.

> Anyway, this is not (just) about getting as many
> particles as possible for your computations ("greed"), but rather
> about threat control ("survival").

I agree, except that I'd delete the "just". The greed
utility-differentials are small enough to be irrelevant.

> There is an "unknown" chance
> that the Sysop will turn against you (for whatever reason), in
> which case you're at a horrible disadvantage, much more so
> than in the case of a MAD failure and subsequent battle between
> more or less equally developed Powers. I'd rather fight a
> thousand peers than one massively superior Power anytime.

If you have to fight, you lose. Period.

> Uploads basically have two chances to survive: 1) You make
> a deal, most likely based on MAD, and no-one fights, everyone
> lives.

As an end-state, this is physically plausible; i.e. either defense beats
attack, or any unstoppable attack can be detected in time to launch an
unstoppable retaliation. I wouldn't really argue with a "10%-70%"
probability of stability for the physical situation you describe,
although I'd make it more like "20%-40%". The problem is maintaining
perfect synchronization until Power level is attained - without invoking
a Sysop, of course. How do you get from your basement laboratory to a
group of independent Powers running on independent hardware? Without
allowing runaway self-enhancement or first-strike capability on anyone's
part at any point along the curve?

Even if that happens, the social and technological tenuousities involved
with postulating uploading on your part (*before* Sysop achievement on
mine) are more than enough to place your scenario beyond the realm of
the possible. And if you managed to deal even with *that*, you'd still
have to deal with the philosophical problems of "destructive upgrading",
which should, by your declared rational rules, decrease the perceived
utility-increment of your scenario (from the den Otter perspective) over
the Sysop scenario (from the den Otter perspective).

It's not just one objection, it's *all* the objections.

> 2) If this fails, you still have a fighting 0.1% chance
> even in the worst case scenario, i.e. when everyone fights to
> the death

This chance is trivial. 0.1%, nonconsequential, as stated. It should
not factor into rational calculations.

> (*which isn't likely*; SIs ain't stupid so they're
> more likely to compromise than to fight a battle with such bad
> odds).

Only if compromise is enforceable by counterstrikes, or if treaty is
enforceable by Hofstadterian superrationality. I think the second
scenario is nonthinkable, and the first scenario is dependent on physics.

> Therefore, I have to conclude that an upload's
> chance would be considerably better than your 0.1%
> figure. 10-70%, i.e. Sysop range, would be a lot more
> realistic, IMO. SI may not like competition, but they
> are no *retards*. I'd be surprised indeed if they just
> started bashing each other like a bunch of cavemen.
> If MAD can even work for tribes of highly irrational
> monkeys, it sure as hell should work for highly rational
> Powers.

Again, I'll okay the possible physical plausibility of your end-state,
but I really don't think you can get there from here.

> > What can you, as a cognitive designer, do with a design for a group of
> > minds that you cannot do with a design for a single mind? I think the
> > very concept that this constitutes any sort of significant innovation,
> > that it contributes materially to complexity in any way whatsoever, is
> > evolved-mind anthropomorphism in fee simple.
>
> Just plain old multiple redundancy. Seemed like a good idea
> when there's so much at stake and humans are doing the
> initial programming.

If you're gonna do multiple redundancy, do it with multiple versions of
internal modules in a single individual. Don't do it with multiple
individuals all using the same architecture. That's just asking for trouble.

> > As I recall, you thought
> > approximately the same thing, back when you, I, and Nick Bostrum were
> > tearing apart Anders Sandberg's idea that an optimized design for a
> > Power could involve humanlike subprocesses.
>
> Ah, those were the days. Was that before or after we smashed
> Max More's idea that a SI would need others to interact with
> (for economic or social reasons)?

Yep, those were the days.

> Anyway, I still agree that
> messy human subprocesses should be kept out of a SI's
> mental frame, of course. No disagreement here. But what
> exactly has this got to do with multiple redundancy for
> Sysops?

I'm just saying that the concept that a committee is more reliable than
an individual is anthropomorphic, having to do with the idea that
competing areas of willful blindness will sum to overall
trustworthiness. If you're gonna have redundancy, it seems to me that
one should sprinkle it through multiple hierarchical levels.

Anyway, redundancy is a programmer's issue, and not really germane.

> > > > I *would* just forget about the Singularity, if it was necessary.
> > >
> > > Necessary for what?
> >
> > Serving the ultimate good.
>
> Oh yes, of course. I suppose this is what some call "God"...

Yes, den Otter, and some people call the laws of physics "God". We both
know better than to care. Right?

> > No. I'm allowing the doubting, this-doesn't-make-sense part of my mind
> > total freedom over every part of myself and my motivations; selfishness,
> > altruism, and all. I'm not altruistic because my parents told me to be,
> > because I'm under the sway of some meme, or because I'm the puppet of my
> > romantic emotions; I'm altruistic because of a sort of absolute
> > self-cynicism under which selfishness makes even less sense than
> > altruism. Or at least that's how I'd explain things to a cynic.
>
> I've done some serious doubting myself, but extreme (self)
> cynicism invariably leads to nihilism, not altruism.

?? How so? There's no logical, rational sequitur from "life is
meaningless" to "go on a shooting spree". It's just a habit of thought
that comes from using "life is meaningful" to justify "I will not go on
a shooting spree"; it's just as easy to organize your thoughts so that
"I will not go on a shooting spree" is justified by "that would be even
*more* pointless".

> Altruism
> is just one of many survival strategies for "selfish" genes,
> *clearly* just a means to an evolved end. Altruism is a sub-
> goal if there ever was one, and one that becomes utterly
> redundant when a self-sufficient entity (a SI) gets into a
> position where it can safely eliminate all competition.

Adaptation-executers, not fitness-maximizers. Evolution is, in a sense,
subjective. There's no true mind behind it. What do I care what
evolution's "purpose" was in giving me the ability to see that 2 and 2
make 4? I'm here now, making the adaptations serve *my* purpose.

Of course altruism is a subgoal-from-an-engineering-standpoint of
evolution. So is selfishness. So is pleasure, and pain, and rational
thought. The substance of our mind can put those building blocks
together however we like, or favor the ones which are least arbitrary.
Why would an SI that eliminated altruism not also eliminate the tendency
to eliminate competition? One is as arbitrary as the other.

> Altruism is *compromise*. A behaviour born out of necessity
> on a planet with weak, mortal, interdependent, evolved
> creatures. To use it out of context, i.e. when it has
> become redundant, is at least as arbitrary as the preference
> for some particular flavor of ice-cream. At the very least,
> it is just as arbitrary as selfishness (its original "master").

This is *precisely* where you are wrong. Altruism and selfishness are
independent adaptations, subgoals of the relentlessly selfish *gene*,
which does not give the vaguest damn how long you survive as long as
you have a lot of grandchildren. Yes, some types of altruism serve
selfish purposes, just as selfishness, in a capitalist economy, can
serve the altruistic purposes of providing goods and services at low prices.

> So, unless you just flipped a coin to determine your guideline
> in life (and a likely guideline for SIs), what exactly *is* the
> logic behind altruism? Why on earth would it hold in a SI world,
> assuming that SIs can really think for themselves and aren't
> just blindly executing some original program?

Are we talking about objective morality, or altruism? Altruism is, at
the core, inertia, unless it turns out to be validated by objective
morality. Objective morality would be a physical truth, which would
show up in any mirror that reflects external reality, any mind that
represents the Universe.

> > Anxiety! Circular logic! If you just let *go*, you'll find that your
> > mind continues to function, except that you don't have to rationalize
> > falsehoods for fear of what will happen if you let yourself see the
> > truth. Your mind will go on as before, just a little cleaner.
>
> A bit of untangling is fine, but getting rid of the "I" is not
> acceptable until I have (much) more data. Such is the prudent
> approach...Upload, expand, reconsider.

den Otter, letting go of this particular irrational prejudice is not a
philosophical move. It is the *minimum* required to execute the
*reasoning* you must use *now* in order to *survive*.

==

> Also, one would expect
> that he, subconsciously or not, expects to -eventually- be
> rewarded in one way or another for his good behaviour.
> The latter certainly isn't unlikely in *his* reality...

I doubt it.
One: Adaptation-executers, not fitness-maximizers.
Two: Have you *seen* all the episodes this season? Including the one
where Buffy showed up?

-- 
       sentience@pobox.com      Eliezer S. Yudkowsky
          http://pobox.com/~sentience/beyond.html
                 Member, Extropy Institute
           Senior Associate, Foresight Institute


