Re: Otter vs. Yudkowsky

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Mar 15 2000 - 12:44:11 MST


"D.den Otter" wrote:
>
> ----------
> > From: Eliezer S. Yudkowsky <sentience@pobox.com>
>
> [snip agreement]
> > The points in dispute are these:
> >
> > 1) From a strictly selfish perspective, does the likely utility of
> > attempting to upload yourself outweigh the utility of designing a Sysop
> > Mind? Sub-disputes include (2) whether it's practically possible to
> > develop perfect uploading before China initiates a nanowar or Eliezer
> > runs a seed AI; (3) whether the fact that humans can be trusted no more
> > than AIs will force your group to adopt a Sysop Mind approach in any
> > case;
>
> Yes, this pretty much sums it up.
>
> > (4) whether telling others that the Transtopians are going to
> > upload and then erase the rest of humanity
>
> This is optional. Also, this is not specifically a "transtopian" thing;
> if it's the most logical course of action, *all* ascending Minds
> would erase the rest of humanity. Wiping out humanity for
> its own sake is hardly interesting, and can be done more
> conveniently in simulations anyway, if so desired.

I accept the correction.

> > will generate opposition
> > making it impossible for you to gain access to uploading prerequisite
> > technologies.
>
> Opposition in the form of government bans should be no
> problem for a dedicated group. Besides, look who's
> talking; your pages are bound to scare the living crap
> out of many people, and not just the fanatical luddites
> (see KPJ's recent post regarding the Bill Joy poll at
> http://cgi.zdnet.com/zdpoll/question.html?pollid=17054&action=a,
> for example). People are duly getting worried, and
> statements like "there's a significant chance that AIs
> will indeed wipe us out, but hey, that's cool as long as
> they find the meaning of life" aren't likely to calm them
> down.

Actually, the statement I use is "there's a significant chance that AIs
will indeed wipe us out, but we can't avoid confronting that chance
sooner or later, and the sooner we confront it the less chance we have
of being wiped out by something else".

Our point of dispute is whether uploading makes any real difference to
personal-survival-probability compared with AI. From my perspective, humans
are simply a special case of AIs, and synchronization is physically
implausible. If uploading presents a difference from AI, therefore, it
can only do so for one (1) human. If you, den Otter, are part of a
group of a thousand individuals, your chance of deriving any benefit
from this scenario is not more than 0.1%.
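
A quick sketch of that arithmetic, purely illustrative (the group size of
a thousand is the figure under discussion; nothing else below comes from
your scenario):

    # Symmetry argument: if only one member of the uploading group can end
    # up as the unique ascended Mind, and no member starts from a privileged
    # position, each member's chance of being that one is at most 1/N.
    GROUP_SIZE = 1000                    # the thousand-member group above
    p_benefit = 1 / GROUP_SIZE
    print(f"chance of being the sole beneficiary: {p_benefit:.1%}")  # 0.1%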

> > > Well, it's certainly better than nothing, but the fact remains that
> > > the Sysop mind could, at any time and for any reason, decide
> >
> > If it doesn't happen in the first few hours, you're safe forever.
>
> "Objective" hours? That could be thousands if not millions
> of subjective years from the Sysop's pov. That's plenty of
> time to mutate beyond recognition.

My point is that there is not an infinite number of chances for things
to go wrong. If it doesn't decide to off you after a couple of hours,
you don't need to spend the rest of eternity living in fear.

> Damn right. This could be the only chance we'll ever get to
> become truly "free" (as far as mental structures allow), and
> it would be a shame to waste it.

My point is that existing inside a Sysop wouldn't "feel" any less free
than being a Lone Power, unless your heart's desire is to take actions
that would infringe on the rights of others. Certainly, it would be no
less free - with freedom measured in terms of available actions - than
existing in a Universe where other Powers had their own physically
inviolable areas of control, and it would be a great deal more free than
the Universe you'd live in if a non-Sysop gained the upper hand.

> Handing over control to AIs may initially offer an advantage, i.e.
> a greater chance of surviving the early phases of the Singularity,
> but may very well be a long-term terminal error. Uploading
> with the help of "dumb" computers on the other hand may
> increase your initial risks, but *if* you make it you've made
> it good. The key issue is: how big are these chances *really*?
> Ok, you say something like 30% vs 0.1%, but how exactly
> did you get these figures? Is there a particular passage in
> _Coding..._ or _Plan to.._ that deals with this issue?

30% I pulled out of thin air. 0.1% is a consequence of symmetry and the
fact that only one unique chance for success exists.

> So the Sysop is God. Ok, let's hope Murphy's Law doesn't
> apply *too* much to coding AIs...

Intelligence is the force that opposes Murphy's Law. An intelligent,
self-improving AI should teach itself to be above programmer error and
even hardware malfunction.

> Actually, we just don't know what forces would be at work in
> an AI that has reached the "omniscient" level. Maybe some
> form of terminal corruption is inevitable somewhere along

Aside from your social paranoia instinct, which is *very* far from home,
there is no more reason to believe this *is* the case than there is to
believe that the AI would inevitably evolve to believe itself a chicken.

> the line, which would certainly explain the apparent lack of
> SI activity in the known universe. Bottom line: there are
> absolutely no guarantees. A dynamic, evolving system
> can't be trusted by definition.

Said the dynamic, evolving system to the other dynamic, evolving system...

> > It has nothing to do with who, or what, the SIs are. Their
> > "right" is not a matter of social dominance due to superior
> > formidability, but a form of reasoning that both you or I would
> > inevitably agree with if we were only smart enough.
>
> Yes, *we* could "inevitably" agree that it makes perfect sense
> to disassemble all lower life forms.

You're anthropomorphizing - social hierarchies are human-evolutionary artifacts.

I'm not talking about an observer-dependent process. I mean that if you
and I were Minds and an Objective third Mind decided we needed killing,
we would agree and commit suicide. Again, there is absolutely no
a priori reason to expect that goals would converge to some kind of
observer-dependent set of answers. If goals converge at all, I'd expect
them to behave like all other known "convergent mental structures which
do not depend on initial opinions", which we usually call "facts".

Any other scenario may be a theoretical possibility due to
"inscrutability" but it's as likely as convergence to chickenhood.

> Fat comfort to *them* that the
> Almighty have decided so in their infinite wisdom. I wouldn't
> be much surprised if one of the "eternal truths" turns out to
> be "might makes right".

*I* would. "Might makes right" is an evolutionary premise which I
understand in its evolutionary context. To find it in a Mind would be
as surprising as discovering an inevitable taste for chocolate.

> > That human moral reasoning is observer-dependent follows from the
> > historical fact that the dominant unit of evolutionary selection was the
> > individual. There is no reason to expect similar effects to arise in a
> > system that can be programmed to conceptualize itself as a design component
> > as easily as an agent or an individual, and more likely would simply
> > not have any moral "self" at all. I mean, something resembling an
> > "I" will probably evolve whether we design it or not, but that doesn't
> > imply that the "I" gets tangled up in the goal system. Why would it?
>
> I don't know, it's rather difficult to imagine an untangled "I".

It's difficult for me too, but I can take a shot at it!

> This is basically an alien form of life...

Yes, it is!

> > I would regard this as good, we're fundamentally and
> > mortally opposed, and fortunately neither of us has any influence
> > whatsoever on how it turns out.) But while the seed AI isn't at the
> > level where it can be *sure* that no objective meaning exists,
>
> How can the AI be sure that it has reached this mystical
> plateau of true objective meaning? Because it "just knows"?

When it reaches the point where any objective morality that exists would
probably have been discovered; i.e. when the lack of that discovery
counts as input to the Bayesian Probability Theorem. When continuing to
act on the possibility would be transparently stupid even to you and me,
we might expect it to be transparently stupid to the AI.
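
To make that concrete, here is a minimal sketch of the update I have in
mind; the prior and the per-upgrade discovery probability are assumptions
for illustration, not figures from anywhere:

    # Bayes' theorem: repeated failure to discover an objective morality,
    # across upgrades each of which would probably have found one if it
    # existed, drives down the probability that one exists to be found.
    def update(prior, p_miss_if_exists, p_miss_if_absent=1.0):
        """Posterior P(exists | still not discovered)."""
        joint_exists = p_miss_if_exists * prior
        evidence = joint_exists + p_miss_if_absent * (1.0 - prior)
        return joint_exists / evidence

    p = 0.5                                  # assumed prior
    for upgrade in range(1, 11):
        p = update(p, p_miss_if_exists=0.5)  # assume each upgrade had a 50%
                                             # chance of finding it, if real
        print(f"after upgrade {upgrade}: P(objective morality) = {p:.4f}")

Past some point the residual probability is too small to act on, which is
the "transparently stupid" threshold above.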

> > it has to
> > take into account the possibility that it does. The seed would tend to
> > reason: "Well, I'm not sure whether or not this is the right thing to
> > do, but if I just upgrade myself a bit farther, then I'll be sure."
>
> Ad infinitum. A bit like chasing the horizon, IMO.

Not at all, as stated above.

> > The only way the Pause would be a valid suggestion is if there's such a
> > good reason for doing it that the seed itself would come up with the
> > suggestion independently.
>
> Again, it's all a matter of perspective; what's good from the AI's
> pov doesn't have to be good from *our* pov.

Sigh.

This is the basic anthropomorphization.

What you call a "point-of-view" is an evolutionary artifact. I can see
it from where I stand, a great tangled mess of instincts and intuitions
and cached experience about observer-dependent models of the Universe,
observer-dependent utility functions, the likelihood that others will
"cheat" or tend to calculate allegedly group-based utility functions in
ways that enhance their own benefit, and so on.

I can also see that it won't exist in an AI. An AI *does not have* a
pov, and your whole don't-trust-the-AIs scenario rests on that basic
anxiety, that the AI will make decisions biased toward itself. There is
just no reason for that to happen. An AI doesn't have a pov. It's
every bit as likely to make decisions biased towards you, or towards the
"viewpoint" of some quark somewhere, as it is to make decisions biased
toward itself. The tendency that exists in humans is the product of
evolutionary selection and nothing else.

Having a pov is natural to evolved organisms, not to minds in general.

> There may be a
> fundamental conflict between the two without one being
> more (objectively) "right" than the other. Compare it to the
> situation of, for example, a snake and a rodent. The snake
> has to feed in order to survive, while the rodent obviously
> needs to avoid capture to achieve the same goal. Has one
> more "right" to live than the other? No, they are both "right"
> from their own pov. It would probably be no different in a
> human-AI conflict of interest.

It would be very much different. Both snakes and rodents evolved.
Humans may have evolved, but AIs haven't.

> > In which case,
> > delaying Sysop deployment involves many definite risks.
> [snip risks, on which we agree]
>
> > You and a thousand other Mind-wannabes wish to
> > ensure your safety and survival. One course of action is to upload,
> > grow on independent hardware, and then fight it out in space. If
> > defense turns out to have an absolute, laws-of-physics advantage over
> > offense, then you'll all be safe. I think this is extraordinarily
> > unlikely to be the case, given the historical trend. If offense has an
> > advantage over defense, you'll all fight it out until only one Mind
> > remains with a monopoly on available resources.
>
> True, but who knows what the ascending Minds will decide
> in their "infinite" wisdom? Perhaps they'll come up with some
> really clever solution. Perhaps they'll even consider the needs
> of the rest of humanity. And yes, maybe the first thing they'll
> do is make some really big guns and try to blast each other.
> Only one way to find out...

If you're going to invoke inscrutability, I can invoke it too. If the
opportunity for many peaceful happy free Minds exists, then the Sysop
should be able to see it and set us all free.

> > However, is the utility
> > of having the whole Solar System to yourself really a thousand times the
> > utility, the "fun", of having a thousandth of the available resources?
> > No. You cannot have a thousand times as much fun with a thousand times
> > as much mass.
>
> I think you'd have to be a SI to know that for sure.

The question is not what the SI thinks. It is what *you* think - you,
den Otter, and your organization, or whatever other organization fills
that slot in the script - that will determine whether *you* conclude you
need a peace treaty.

Now, would *you* - right here, right now - rather have the certainty of
getting one six-billionth the mass of the Solar System, or would you
prefer a one-in-six-billion chance of getting it all?
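
For what it's worth, the comparison works out like this under any strongly
sublinear "fun" function; the logarithm and the mass figure below are
placeholder assumptions, not claims:

    import math

    SOLAR_MASS_KG = 2e30         # rough order of magnitude, illustration only
    SHARES = 6_000_000_000       # one share per living human

    def fun(mass_kg):
        """Assumed sublinear (concave) utility of resources."""
        return math.log(mass_kg)

    sure_share = fun(SOLAR_MASS_KG / SHARES)      # certainty of 1/6-billionth
    lottery = (1 / SHARES) * fun(SOLAR_MASS_KG)   # 1-in-6-billion shot at all

    print(f"expected fun, sure share: {sure_share:.2f}")  # roughly 47
    print(f"expected fun, lottery:    {lottery:.2e}")     # roughly 1e-08

The certain share wins by many orders of magnitude, which is the whole
case for a treaty.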

> > You need a peace treaty. You need a system, a process, which ensures
> > your safety.
>
> MAD. Not a very stable system, perhaps, but neither are
> superintelligent, evolving Sysops.

Who said anything about a plural? One Sysop. One Power to enforce The
Rules that keep all the other Powers from playing cops-and-robbers with
the Solar System. That's the point.

As for MAD, it won't work for nano and it won't work for Powers. It's
not just the first-strike problem; we're talking about a system at
extrema with no balancing factors.

> In a way yes, for the time being anyway. The main issue now
> is what level of "intelligence" the Sysop should have.

By the time non-self-improving AI becomes possible, self-improving AI
will have advanced well past the point where I can build one in a basement.

Given enough intelligence to be useful, the AI would have enough
intelligence to independently see the necessity of improving itself further.

I wouldn't trust an unPowered Sysop to upload me, anyway.

In short, I doubt the possibility of building a dumb one.

> > The other half of the "low utility" part is philosophical; if there are
> > objective goals, you'll converge to them too, thus accomplishing exactly
> > the same thing as if some other Mind converged to those goals. Whether
> > or not the Mind happens to be "you" is an arbitrary prejudice; if the
> > Otterborn Mind is bit-by-bit indistinguishable from an Eliezerborn or
> > AIborn Mind, but you take an action based on the distinction which
> > decreases your over-all-branches probability of genuine personal
> > survival, it's a stupid prejudice.
>
> To me genuine personal survival is not just the destination
> (if there even is such a thing), but also the journey. Call it an
> irrational prejudice, but hey, I happen to like the illusion of
> continuity. A solution is only acceptable if it respects this
> pov. To ask of me to just forget about continuity is like asking
> you to just forget about the Singularity.

I *would* just forget about the Singularity, if it was necessary.
Getting to my current philosophical position required that I let go of a
number of irrational prejudices, some of which I was very much attached
to. I let go of my irrational prejudice in favor of the
Singularity-outcome, and admitted the possibility of nanowar. I let go
of my irrational prejudice in favor of a dramatic Singularity, one that
would let me play the hero, in favor of a fast, quiet seed AI.

If it's an irrational prejudice, then let it go.

> > > P.s: do you watch _Angel_ too?
> >
> > Of course.
>
> Ah yes, same here. Nice altruistic chap, isn't he?

And very realistically so, actually. If Angel's will is strong enough
to balance and overcome the built-in predatory instincts of a vampire,
then he's rather unlikely to succumb to lesser moral compromises. He's
also a couple of hundred years old, which is another force operating to
move him to the extrema of whatever position he takes.

Gotta develop an instinct for those non-gaussians if you're gonna think
about Minds...

-- 
       sentience@pobox.com      Eliezer S. Yudkowsky
          http://pobox.com/~sentience/beyond.html
                 Member, Extropy Institute
           Senior Associate, Foresight Institute


