Re: Otter vs. Yudkowsky

From: D.den Otter (neosapient@geocities.com)
Date: Sat Mar 18 2000 - 15:00:19 MST


----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>

> "D.den Otter" wrote:

> > People are duly getting worried, and
> > statements like "there's a significant chance that AIs
> > will indeed wipe us out, but hey, that's cool as long as
> > they find the meaning of life" aren't likely to calm them
> > down.
>
> Actually, the statement I use is "there's a significant chance that AIs
> will indeed wipe us out, but we can't avoid confronting that chance
> sooner or later, and the sooner we confront it the less chance we have
> of being wiped out by something else".

> Our point of dispute is whether uploading presents any real difference
> to personal-survival-probability than AI.

Yes, that's the *practical* side of the dispute. There's also the
philosophical issue of whether personal survival is more important
than the creation of superintelligent successors, "egoism" vs.
"altruism" and so on. This inevitably adds an element of
bias to the above debate.

> From my perspective, humans
> are simply a special case of AIs, and synchronization is physically
> implausible. If uploading presents a difference from AI, therefore, it
> can only do so for one (1) human. If you, den Otter, are part of a
> group of a thousand individuals, your chance of deriving any benefit
> from this scenario is not more than 0.1%.

If there can be only one, then you're right. Still, it's better to
have 0.1% control over a situation than 0%. Now add to
this the following:

> > Ok, you say something like 30% vs 0,1%, but how exactly
> > did you get these figures? Is there a particular passage in
> > _Coding..._ or _Plan to.._ that deals with this issue?
>
> 30% I pulled out of thin air. 0.1% is a consequence of symmetry and the
> fact that only one unique chance for success exists.

Out of thin air. Ok, so it could really be anything, including
some value <0.1%. At least with uploading you know more
or less what you can expect.
 
> > "Objective" hours? That could be thousands if not millions
> > of subjective years from the Sysop's pov. That's plenty of
> > time to mutate beyond recognition.
>
> My point is that there is not an infinite number of chances for things
> to go wrong.

The longer you exist, the more opportunities there will be for
something to go wrong. That's pretty much a mathematical
certainty, afaik.
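
To put a rough number on that intuition (my own back-of-the-envelope
sketch, assuming an independent, constant per-period risk p of fatal
failure, which is of course a simplification):

    P(\text{failure within } n \text{ periods}) = 1 - (1 - p)^n \to 1
    \quad \text{as } n \to \infty, \text{ for any fixed } p > 0

So even a tiny per-period risk accumulates toward certainty over a
long enough timescale; the only question is how long "long enough" is.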

> If it doesn't decide to off you after a couple of hours,
> you don't need to spend the rest of eternity living in fear.

I don't think so; what's a couple of hours compared to
the billions of years that lie ahead? It's just a baby whose
journey to maturity has only begun. Who knows how
many Singularities will follow? The AI could "malfunction"
or discover the meaning of life right away, or it could take
aeons. There's just no way of knowing that.
 
> > Damn right. This could be the only chance we'll ever get to
> > become truly "free" (as far as mental structures allow), and
> > it would be a shame to waste it.
>
> My point is that existing inside a Sysop wouldn't "feel" any less free
> than being a Lone Power, unless your heart's desire is to take actions
> that would infringe on the rights of others. Certainly, it would be no
> less free - with freedom measured in terms of available actions - than
> existing in a Universe where other Powers had their own physically
> inviolable areas of control,

Those are two different kinds of "freedom". In the former case
you can go everywhere, but you carry your own partial prison
around in your head (like the guy from _A Clockwork Orange_),
while in the latter case you may not be able to go anywhere
you want, but you *are* master of your own domain. I think
I prefer the latter, not only because it is more "dignified"
(silly human concept), but because it's wise to put as much
distance and as many defences as possible between yourself
and a potential enemy. When that enemy is sharing your
body, you have a real problem.

> and it would be a great deal more free than
> the Universe you'd live in if a non-Sysop gained the upper hand.

Probably... Initially, at least.
 
> > So the Sysop is God. Ok, let's hope Murphy's Law doesn't
> > apply *too* much to coding AIs...
>
> Intelligence is the force that opposes Murphy's Law.

Usually with lots of trial and error...

> An intelligent,
> self-improving AI should teach itself to be above programmer error and
> even hardware malfunction.

Yes, *if* you can get the basics right.
 
> > the line, which would certainly explain the apparent lack of
> > SI activity in the known universe. Bottom line: there are
> > absolutely no guarantees. A dynamic, evolving system
> > can't be trusted by definition.
>
> Said the dynamic, evolving system to the other dynamic, evolving system...

I don't claim that I (or humans in general) can be trusted,
do I? I know that humans can't be trusted, but neither can
evolving AIs, so there is no particular reason to favor them
as Sysops for mankind.
 
> > > It has nothing to do with who, or what, the SIs are. Their
> > > "right" is not a matter of social dominance due to superior
> > > formidability, but a form of reasoning that both you or I would
> > > inevitably agree with if we were only smart enough.
> >
> > Yes, *we* could "inevitably" agree that it makes perfect sense
> > to disassemble all lower life forms.
>
> You're anthropomorphizing - social hierarchies are human-evolutionary artifacts.

Natural evolution may have made some pretty bad mistakes, but
that doesn't necessarily mean that *all* of our programming will become
obsolete. If the SIs want to do something, they will have to stay
alive to do it (unless of course they decide to kill themselves, but
let's assume for the sake of argument that this won't be the case).
Basic logic. So some sort of self-preservation "instinct" will be
required(*) to keep the forces of entropy at bay. Survival requires
control (the more the better) over one's surroundings. Other
intelligent entities represent, by definition, an area of diminished
control, and must be studied and then placed in a threat/benefit
hierarchy which will help to determine future actions. And voila,
your basic social hierarchy is born. The "big happy egoless
cosmic family model" only works when the other sentients
are either evolutionary dead-ends which are "guaranteed" to
remain insignificant, or completely and permanently like-minded.

(*) This is one of those things that I expect to evolve "spontaneously"
in a truly intelligent system that has some sort of motivation. A system
without goals will just sit on its ass for all eternity, IMO.
 
> I'm not talking about an observer-dependent process. I mean that if you
> and I were Minds and an Objective third Mind decided we needed killing,
> we would agree and commit suicide. Again, there is absolutely no
> a-priori reason to expect that goals would converge to some kind of
> observer-dependent set of answers. If goals converge at all, I'd expect
> them to behave like all other known "convergent mental structures which
> do not depend on initial opinions", which we usually call "facts".
>
> Any other scenario may be a theoretical possibility due to
> "inscrutability" but it's as likely as convergence to chickenhood.

No, no, no! It's exactly the other way around; goals are
observer-dependent by default. As far as we know this is
the only way they *can* be. Is there any evidence to the
contrary? Not afaik. It is the whole objective morality idea
that is wild speculation, out there in highly-unlikely-land
with the convergence to chickenhood & the tooth fairy.
 
> > Fat comfort to *them* that the
> > Almighty have decided so in their infinite wisdom. I wouldn't
> > be much surprised if one of the "eternal truths" turns out to
> > be "might makes right".
>
> *I* would. "Might makes right" is an evolutionary premise which I
> understand in its evolutionary context. To find it in a Mind would be
> as surprising as discovering an inevitable taste for chocolate.

Evolution represents, among other things, some basic rules
for survival. No matter how smart the SIs become, they'll
still have to play by the rules of this reality to live & prosper.
You can't deny self-evident truths like "might makes right"
without paying the price (decreased efficiency, possibly
serious damage or even annihilation) at some point. And
yes, I also believe that suicide is fundamentally stupid,
*especially* for a Power which could always alter its mind
and bliss out forever if there's nothing better to do. The only
logical excuse for killing yourself is when you know for pretty
damn sure, beyond all reasonable doubt, that the alternative
is permanent, or "indefinite", hideous suffering. Now how
likely is *that*?
 
> > I don't know, it's rather difficult to imagine an untangled "I".
>
> So do I, but I can take a shot at it!

But you don't, more or less by definition, really know what
you're doing!
 
> > This is basically an alien form of life...
>
> Yes, it is!

Yes indeed; we are inviting aliens to our planet. Not
just any aliens, mind you, but *superintelligent* ones.
Can't help feeling that this is a very big mistake. Is
this just irrational monkey paranoia, or the voice of
reason itself? Well, I guess we're about to find out...
 
> > How can the AI be sure that it has reached this mystical
> > plateau of true objective meaning? Because it "just knows"?
>
> When it reaches the point where any objective morality that exists would
> probably have been discovered; i.e. when the lack of that discovery
> counts as input to the Bayesian Probability Theorem. When continuing to
> act on the possibility would be transparently stupid even to you and I,
> we might expect it to be transparently stupid to the AI.

In other words, objective morality will always be just an educated
guess. Will there be a limit to evolution anyway? One would be
inclined to say "yes, of course", but if this isn't the case, then
the quest for objective morality will go on forever.
 
> What you call a "point-of-view" is an evolutionary artifact.

A rather useful one too. The creatures with the strongest
point-of-view (i.e. humans) dominate the planet utterly, and
are the only ones that have developed culture and technology.
One would logically expect our successors to have *more*
"pov", not less.

> I can see
> it from where I stand, a great tangled mess of instincts and intuitions
> and cached experience about observer-dependent models of the Universe,
> observer-dependent utility functions, the likelihood that others will
> "cheat" or tend to calculate allegedly group-based utility functions in
> ways that enhance their own benefit, and so on.

Needs a bit (ok, a lot) of tweaking, perhaps, but the basic
components for functioning are all there. It works! No idle
speculation, but a system that's been approved by this
reality.
 
> I can also see that it won't exist in an AI. An AI *does not have* a
> pov, and your whole don't-trust-the-AIs scenario rests on that basic
> anxiety, that the AI will make decisions biased toward itself. There is
> just no reason for that to happen. An AI doesn't have a pov. It's
> every bit as likely to make decisions biased towards you, or towards the
> "viewpoint" of some quark somewhere, as it is to make decisions biased
> toward itself. The tendency that exists in humans is the product of
> evolutionary selection and nothing else.

Evolutionary selection for *survival*, yes. And, once again,
you really, really need to be alive to do something. Whether
those actions are altruistic or egoistic in nature is beside
the point.
 
> Having a pov is natural to evolved organisms, not to minds in general.

I'm sure you could make some pov-less freak in the lab, and
keep it alive under "ideal", sterile conditions, but I doubt that
it would be very effective in the real world. As I see it, we have
two options: a) the mind really has no "self" and no "bias"
when it comes to motivation, in which case it will probably just
sit there and do nothing, or b) it *does* have a "self", or creates
one as a logical result of some pre-programmed goal(s), in
which case it is likely to eventually become completely
"selfish" due to a logical line of reasoning.

[snakes & rodents compared to AIs & humans]
> It would be very much different. Both snakes and rodents evolved.
> Humans may have evolved, but AIs haven't.

But they will have to evolve in order to become SIs.

> Now, would *you* - right here, right now - rather have the certainty of
> getting one six-billionth the mass of the Solar System, or would you
> prefer a one-in-six-billion chance of getting it all?

I'd take the former, of course, but that's because the odds in this
particular example are extremely (and quite unrealistically) bad.
In reality, it's not you vs. the rest of humanity, but you vs.
a relatively small financial/technological elite, many (most) of
whom don't even fully grasp the potential of the machines they're
working on. Most people will simply never know what hit them.
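
For what it's worth, the raw expected value in Eliezer's example is
the same either way (just arithmetic on his own numbers, with M as
the mass of the Solar System):

    \frac{M}{6 \times 10^9} = \frac{1}{6 \times 10^9} \cdot M

so the choice only measures risk preference; my objection is that the
real denominator is nowhere near six billion.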

Anyway, there are no certainties. AI is not a "sure shot", but
just another blind gamble, so the whole analogy sort of
misses the point.
 
> > > You need a peace treaty. You need a system, a process, which ensures
> > > your safety.
> >
> > MAD. Not a very stable system, perhaps, but neither are
> > superintelligent, evolving Sysops.
>
> Who said anything about a plural? One Sysop.

Ok, one then. Doesn't change the argument much though;
it's still an unstable solution. In fact, a system with
multiple independent Sysops could arguably be safer, providing
checks & balances like, for example, the multiple main
computers in passenger jets or the Space Shuttle. Only
those commands which receive a majority "vote" are carried
out. Works pretty well, afaik. So if you're gonna make
an AI (Sysop or otherwise) anyway, better make several
at once (say, 3-10, depending on the costs etc.).
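
To illustrate the voting idea (a toy sketch of my own; the commands,
the threshold and the rest are obviously made up, and nothing here
pretends to capture what a real Sysop would do):

from collections import Counter

def majority_vote(proposals):
    """Carry out a command only if a strict majority of the
    independent Sysops proposes exactly the same command."""
    if not proposals:
        return None
    command, votes = Counter(proposals).most_common(1)[0]
    return command if votes > len(proposals) / 2 else None

# Three independent Sysops evaluate the same request; one has drifted.
print(majority_vote(["allow", "allow", "disassemble requester"]))  # -> "allow"
print(majority_vote(["A", "B", "C"]))  # no majority -> None, command refused

The same scheme scales to any odd number of Sysops, and a single
"malfunctioning" one can never get its commands executed on its own.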

> One Power to enforce The
> Rules that keep all the other Powers from playing cops-and-robbers with
> the Solar System. That's the point.

Power corrupts, and absolute power... this may not apply just
to humans. Better have an Assembly of Independent Powers.
Perhaps the first thing they'd do is try to assassinate each other;
that would be pretty funny.

> By the time non-self-improving AI becomes possible, self-improving AI
> will have advanced well past the point where I can build one in a basement.
>
> Given enough intelligence to be useful, the AI would have enough
> intelligence to independently see the necessity of improving itself further.
>
> I wouldn't trust an unPowered Sysop to upload me, anyway.

Perhaps you're overestimating the difficulty of uploading;
the (destructive) scan method may require some kind of intelligent
AI, but more gradual procedures involving neural implants and
the like should be quite feasible for smart humans with "dumb"
but powerful computers. It's just fancy neurosurgery, really,
which could be perfected in simulations and on various
animals (and eventually some humans).

> > To ask of me to just forget about continuity is like asking
> > you to just forget about the Singularity.
>
> I *would* just forget about the Singularity, if it was necessary.

Necessary for what?

> Getting to my current philosophical position required that I let go of a
> number of irrational prejudices, some of which I was very much attached
> to. I let go of my irrational prejudice

Irrational by what standards -- those imposed by another randomly
picked irrational prejudice?

> in favor of the
> Singularity-outcome,

In what context (from your pov, not mine) is the Singularity
rational, and why isn't this motivation just another irrational
prejudice that just happens to have strongly linked itself to
your pleasure centers? Aren't you really just rationalizing
an essentially "irrational" choice (supergoal) like the rest of
humanity?

> and admitted the possibility of nanowar. I let go
> of my irrational prejudice in favor of a dramatic Singularity, one that
> would let me play the hero, in favor of a fast, quiet seed AI.
>
> If it's an irrational prejudice, then let it go.

Then you'd have to stop thinking altogether, I'm afraid.
 
> > > > P.s: do you watch _Angel_ too?
> > > Of course.
> > Ah yes, same here. Nice altruistic chap, isn't he?
>
> And very realistically so, actually. If Angel's will is strong enough
> to balance and overcome the built-in predatory instincts of a vampire,
> then he's rather unlikely to succumb to lesser moral compromises. He's
> also a couple of hundred years old, which is another force operating to
> move him to the extrema of whatever position he takes.

Sure, but what's really interesting is that his conscience
(aka "soul") is a *curse*, specifically designed to make him
suffer (*). The implicit(?) message is that altruism isn't an
asset, but a handicap. The enlightened self-interest-based ethics
of "normal" vampires certainly make a lot more sense, rationally
speaking; they usually have just one major rule, something along
the lines of "do what you want as long as you don't harm your
own kind". Straightforward, logical, honest, robust & efficient.
Compared to this elegant setup, altruism-based ethics (which
glorify "terminal" self-sacrifice etc.) are just fragile bubbles,
ready to be popped by the sharp needle of reality.
 
(*) Not exactly fair; it was the demon Angelus and not Angel himself
who did all the killing etc. They are, afaik, two different persons
who happen to use the same body. Those Gypsies were/are torturing
an innocent man.

> Gotta develop an instinct for those non-gaussians if you're gonna think
> about Minds...

I'm pretty sure I've got the basics right, but of course so are you.


