Re: Otter vs. Yudkowsky

From: D.den Otter (neosapient@geocities.com)
Date: Wed Mar 15 2000 - 07:01:52 MST


----------
> From: Eliezer S. Yudkowsky <sentience@pobox.com>

[snip agreement]
> The points in dispute are these:
>
> 1) From a strictly selfish perspective, does the likely utility of
> attempting to upload yourself outweigh the utility of designing a Sysop
> Mind? Sub-disputes include (2) whether it's practically possible to
> develop perfect uploading before China initiates a nanowar or Eliezer
> runs a seed AI; (3) whether the fact that humans can be trusted no more
> than AIs will force your group to adopt a Sysop Mind approach in any
> case;

Yes, this pretty much sums it up.

> (4) whether telling others that the Transtopians are going to
> upload and then erase the rest of humanity

This is optional. Also, this is not specifically a "transtopian" thing;
if it's the most logical course of action, *all* ascending Minds
would erase the rest of humanity. Wiping out humanity for
its own sake is hardly interesting, and can be done more
conveniently in simulations anyway, if so desired.

> will generate opposition
> making it impossible for you to gain access to uploading prerequisite
> technologies.

Opposition in the form of government bans should be no
problem for a dedicated group. Besides, look who's
talking; your pages are bound to scare the living crap
out of many people, and not just the fanatical Luddites
(see, for example, KPJ's recent post regarding the Bill Joy
poll at http://cgi.zdnet.com/zdpoll/question.html?pollid=17054&action=a).
People are duly getting worried, and
statements like "there's a significant chance that AIs
will indeed wipe us out, but hey, that's cool as long as
they find the meaning of life" aren't likely to calm them
down.
 
> I think that enough of the disputed points are dependent upon concrete
> facts to establish an unambiguous rational answer in favor of seed AI.

I think we'll need more details here. This may be the most
important issue ever, so surely it deserves a more elaborate
answer.
 
> > Well, it's certainly better than nothing, but the fact remains that
> > the Sysop mind could, at any time and for any reason, decide
>
> If it doesn't happen in the first few hours, you're safe forever.

"Objective" hours? That could be thousands if not millions
of subjective years from the Sysop's pov. That's plenty of
time to mutate beyond recognition.
 
> > that it has better things to do than babysitting the OtterMind,
> > and terminate/adapt the latter. Being completely at
> > someone's/something's mercy is never a good idea.
>
> And here we come to the true crux of the problem. You don't want to be
> at someone else's mercy.

Damn right. This could be the only chance we'll ever get to
become truly "free" (as far as mental structures allow), and
it would be a shame to waste it.

> You don't want to entrust your fate to the
> hidden variables. You want to choose a course of action that puts you
> in the driver's seat, even if it kills you. You're prejudiced in favor
> of plans that include what look like forceful actions against those
> yucky possibilities, even if the actions are ineffective and have awful
> side effects. This is the same intuitive underpinning that underlies
> Welfare, bombing Kosovo and the War on Drugs.

> Screw personal independence and all such slogans; the fundamental
> principle of Transhumanism is *rationality*. If maintaining personal
> control is dumb, then you shouldn't do it.

Handing over control to AIs may initially offer an advantage, i.e.
a greater chance of surviving the early phases of the Singularity,
but it may very well be a long-term terminal error. Uploading
with the help of "dumb" computers, on the other hand, may
increase your initial risks, but *if* you make it, you've made
it for good. The key issue is: how big are these chances *really*?
Ok, you say something like 30% vs. 0.1%, but how exactly
did you get these figures? Is there a particular passage in
_Coding..._ or _Plan to..._ that deals with this issue?
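
To make that question concrete, here's a minimal
back-of-the-envelope sketch (Python; all numbers are made-up
placeholders, not taken from _Coding..._ or anyone's actual
estimates, and it assumes the two survival factors multiply
independently, which is itself debatable) of how such a
comparison would at least have to be framed: chance of getting
through the transition at all, times chance of not being
terminated afterwards.

    # Toy expected-survival comparison. All numbers are hypothetical
    # placeholders, NOT anyone's actual estimates.

    def overall_survival(p_transition, p_long_term):
        """Chance of surviving the early phase of the Singularity
        times chance of long-term survival afterwards."""
        return p_transition * p_long_term

    # Hypothetical inputs for the two strategies under discussion:
    sysop_route  = overall_survival(0.30, 0.50)  # safer start; the Sysop might still turn on you
    upload_route = overall_survival(0.05, 0.90)  # riskier start; you stay in the driver's seat

    print("Sysop route : %.3f" % sysop_route)
    print("Upload route: %.3f" % upload_route)

Plug in whatever numbers you like; the point is that *both*
factors have to be argued for explicitly, not just the first.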
 
> > Who monitors the Sysop?
>
> I've considered the utility of including a "programmer override", but my
> current belief is that the social anxiety generated by planning to
> include such an override has a negative utility that exceeds the danger
> of not having an override. We'll just have to get it right the first
> time (meaning not flawlessness but flaw tolerance, of course).

So the Sysop is God. Ok, let's hope Murphy's Law doesn't
apply *too* much to coding AIs...

[regarding mind control in a human society]
> What makes the system unacceptable, if implemented by humans, is that
> the humans have evolved to be corruptible and have an incredibly bad
> track record at that sort of thing. All the antigovernmental heuristics
> of transhumanism have evolved from the simple fact that, historically,
> government doesn't work. However, an omniscient AI is no more likely to
> become corrupt than a robot is likely to start lusting after human women.

Actually, we just don't know what forces would be at work in
an AI that has reached the "omniscient" level. Maybe some
form of terminal corruption is inevitable somewhere along
the line, which would certainly explain the apparent lack of
SI activity in the known universe. Bottom line: there are
absolutely no guarantees. A dynamic, evolving system
can't be trusted by definition.

> > And something else: you belief that a SI can do with
> > us as it pleases because of its massively superior
> > intelligence. Superior intelligence = superior morality,
> > correct?
>
> No. I believe that, for some level of intelligence above X - where X is
> known to be higher than the level attained by modern humans in modern
> civilization - it becomes possible to see the objectively correct moral
> decisions.

Maybe. Personally I think that SIs will only get a somewhat deeper
understanding of the relativity of it all. The idea that any
intelligence, no matter how formidable, could say at some point
that it "knows everything" seems utterly absurd. It could, for
example, be part
of an even smarter entity's simulation without ever finding out.

> It has nothing to do with who, or what, the SIs are. Their
> "right" is not a matter of social dominance due to superior
> formidability, but a form of reasoning that both you and I would
> inevitably agree with if we were only smart enough.

Yes, *we* could "inevitably" agree that it makes perfect sense
to disassemble all lower life forms. Fat comfort to *them* that the
Almighty have decided so in their infinite wisdom. I wouldn't
be much surprised if one of the "eternal truths" turns out to
be "might makes right".
 
> That human moral reasoning is observer-dependent follows from the
> historical fact that the dominant unit of evolutionary selection was the
> individual. There is no reason to expect similar effects to arise in a
> system that can be programmed to conceptualize itself as a design component
> as easily as an agent or an individual, and more likely would simply
> not have any moral "self" at all. I mean, something resembling an
> "I" will probably evolve whether we design it or not, but that doesn't
> imply that the "I" gets tangled up in the goal system. Why would it?

I don't know, it's rather difficult to imagine an untangled "I". This
is basically an alien form of life...

> I've considered the possibility of a seed AI designed to pause itself
> before it reached the point of being able to discover an objective
> morality, upload humanity, give us a couple of thousand subjective
> millennia of hiatus, and then continue. This way, regardless of how the
> ultimate answers turn out, everyone could have a reasonable amount of
> fun. I'm willing to plan to waste a few objective hours if that plan
> relieves a few anxieties.

Sounds good, but...

> The problem with this picture is that I don't think it's a plausible
> "suggestion". The obvious historical genesis of the suggestion is your
> fear that the Mind will discover objective meaning. (You would regard
> this as bad,

Only if it goes against my enlightened self-interest!

> I would regard this as good, we're fundamentally and
> mortally opposed, and fortunately neither of us has any influence
> whatsoever on how it turns out.) But while the seed AI isn't at the
> level where it can be *sure* that no objective meaning exists,

How can the AI be sure that it has reached this mystical
plateau of true objective meaning? Because it "just knows"?

> it has to
> take into account the possibility that it does. The seed would tend to
> reason: "Well, I'm not sure whether or not this is the right thing to
> do, but if I just upgrade myself a bit farther, then I'll be sure."

Ad infinitum. A bit like chasing the horizon, IMO.

> And
> in fact, this *is* the correct chain of reasoning, and I'm not sure I or
> anyone else could contradict it.

It's logical to improve oneself indefinitely, but this is not the
same as seeking THE truth.
 
> The only way the Pause would be a valid suggestion is if there's such a
> good reason for doing it that the seed itself would come up with the
> suggestion independently.

Again, it's all a matter of perspective; what's good from the AI's
pov doesn't have to be good from *our* pov. There may be a
fundamental conflict between the two without one being
more (objectively) "right" than the other. Compare it to the
situation of, for example, a snake and a rodent. The snake
has to feed in order to survive, while the rodent obviously
needs to avoid capture to achieve the same goal. Has one
more "right" to live than the other? No, they are both "right"
from their own pov. It would probably be no different in a
human-AI conflict of interest.

> The chain of reasoning you're proposing is "destroying humans
> because they pose a potential threat to the goal of protecting humans".
> I mean, "destroying humans because they pose a potential threat to the
> goal of manufacturing shoes" might be a "valid" chain of logic, but not
> destroying humans to protect them.

No, I'm proposing that the AI, Sysop or otherwise, could dump
the initial set of instructions (protect humans etc.) altogether
for some known (glitch) or yet-unknown (alien mental dynamics)
reason, in which case wiping out humanity would no longer
be fundamentally illogical, but perhaps even a *necessary*
measure (in support of some other, unknown, goal). As long
as the AI sticks to its original goals there shouldn't be a
problem. Presumably. I repeat: you can't trust an (ultra-
rapidly) evolving system.

> I think that we have enough concrete knowledge of the social situation,
> and of the pace of technological development, to say that a Sysop
> arrangement will almost certainly become necessary.

Show me the evidence. If it adds up, I'm game.

> In which case,
> delaying Sysop deployment involves many definite risks.
[snip risks, on which we agree]

> You and a thousand other Mind-wannabes wish to
> ensure your safety and survival. One course of action is to upload,
> grow on independent hardware, and then fight it out in space. If
> defense turns out to have an absolute, laws-of-physics advantage over
> offense, then you'll all be safe. I think this is extraordinarily
> unlikely to be the case, given the historical trend. If offense has an
> advantage over defense, you'll all fight it out until only one Mind
> remains with a monopoly on available resources.

True, but who knows what the ascending Minds will decide
in their "infinite" wisdom? Perhaps they'll come up with some
really clever solution. Perhaps they'll even consider the needs
of the rest of humanity. And yes, maybe the first thing they'll
do is make some really big guns and try to blast each other.
Only one way to find out...

> However, is the utility
> of having the whole Solar System to yourself really a thousand times the
> utility, the "fun", of having a thousandth of the available resources?
> No. You cannot have a thousand times as much fun with a thousand times
> as much mass.

I think you'd have to be an SI to know that for sure. But, regardless of
this, you don't need others to have "fun" once you can manipulate
your cognitive/emotional structure, and have the option of creating
any number of simulation worlds. A true SI is by definition fully
self-sufficient, or at least has the option of becoming fully self-
sufficient at any given time by tweaking its mental structure.
Whether or not it would actually need all the resources of the
solar system/galaxy/universe is beside the point.
 
> You need a peace treaty. You need a system, a process, which ensures
> your safety.

MAD (mutually assured destruction). Not a very stable
system, perhaps, but neither are superintelligent,
evolving Sysops.

> The
> clever thing to do would be to create a Sysop which ensures that the
> thousand uploadees do not harm each other, which divides resources
> equally and executes other commonsense rules. Offense may win over
> defense in physical reality, but not in software. But now you're just
> converging straight back to the same method I proposed...

In a way yes, for the time being anyway. The main issue now
is what level of "intelligence" the Sysop should have.
 
> The other half of the "low utility" part is philosophical; if there are
> objective goals, you'll converge to them too, thus accomplishing exactly
> the same thing as if some other Mind converged to those goals. Whether
> or not the Mind happens to be "you" is an arbitrary prejudice; if the
> Otterborn Mind is bit-by-bit indistinguishable from an Eliezerborn or
> AIborn Mind, but you take an action based on the distinction which
> decreases your over-all-branches probability of genuine personal
> survival, it's a stupid prejudice.

To me, genuine personal survival is not just the destination
(if there even is such a thing) but also the journey. Call it an
irrational prejudice, but hey, I happen to like the illusion of
continuity. A solution is only acceptable if it respects this
pov. Asking me to just forget about continuity is like asking
you to just forget about the Singularity.
 
> > Well, that's your educated (and perhaps a wee bit biased)
> > guess, anyway. We'll see.
>
> Perhaps. I do try to be very careful about that sort of thing, though.

I sure hope so.
 
> > P.S.: do you watch _Angel_ too?
>
> Of course.

Ah yes, same here. Nice altruistic chap, isn't he?


