At 06:45 PM 3/13/00 -0600, sentience@pobox.com wrote:
[much snippage of good stuff that does not need me ranting at it]
>And here we come to the true crux of the problem. You don't want to be
>at someone else's mercy. You don't want to entrust your fate to the
>hidden variables. You want to choose a course of action that puts you
>in the driver's seat, even if it kills you. You're prejudiced in favor
>of plans that include what look like forceful actions against those
>yucky possibilities, even if the actions are ineffective and have awful
>side effects. This is the same intuitive underpinning that underlies
>Welfare, bombing Kosovo and the War on Drugs.
>
>Screw personal independence and all such slogans; the fundamental
>principle of Transhumanism is *rationality*. If maintaining personal
>control is dumb, then you shouldn't do it.
do terms like "dumb" kinda lose meaning in the absence of personal
control? i think so.
>> Who monitors the Sysop?
>
>I've considered the utility of including a "programmer override", but my
>current belief is that the social anxiety generated by planning to
>include such an override has a negative utility that exceeds the danger
>of not having an override. We'll just have to get it right the first
>time (meaning not flawlessness but flaw tolerance, of course).
>
>> Self-defense excluded, I hope. Otherwise the OtterMind would
>> be a sitting duck.
>
>No, the Sysop Mind would defend you.
how kind of the sysop. theocracy might sound nifty, but i don't think it
would be stable, let alone doable, from a monkey point of view.
>> Let's look at it this way: what if the government proposed a
>> system like this, i.e. everyone gets a chip implant that will
>> monitor his/her behaviour, and correct it if necessary so that
>> people no longer can (intentionally) harm each other. How
>> would the public react? How would the members of this list
>> react? Wild guess: most wouldn't be too happy about it
>> (to use a titanic understatement). Blatant infringement of
>> fundamental rights and all that. Well, right they are. Now,
>> what would make this system all of a sudden "acceptable"
>> in a SI future? Does an increase in intelligence justify
>> this kind of coercion?
>
>What makes the system unacceptable, if implemented by humans, is that
>the humans have evolved to be corruptible and have an incredibly bad
>track record at that sort of thing. All the antigovernmental heuristics
>of transhumanism have evolved from the simple fact that, historically,
>government doesn't work. However, an omniscient AI is no more likely to
>become corrupt than a robot is likely to start lusting after human women.
an omniscient ai is pretty much inscrutable, right? i don't know how we
can evaluate the inscrutable's chances of becoming what we would call
"corrupt". i think the least inscrutable thing about an omniscient
intelligence would be its need for resources. other than that... i dunno.
>> And something else: you believe that an SI can do with
>> us as it pleases because of its massively superior
>> intelligence. Superior intelligence = superior morality,
>> correct?
>
>No. I believe that, for some level of intelligence above X - where X is
>known to be higher than the level attained by modern humans in modern
>civilization - it becomes possible to see the objectively correct moral
>decisions. It has nothing to do with who, or what, the SIs are. Their
>"right" is not a matter of social dominance due to superior
>formidability, but a form of reasoning that both you and I would
>inevitably agree with if we were only smart enough.
>
>That human moral reasoning is observer-dependent follows from the
>historical fact that the dominant unit of evolutionary selection was the
>individual. There is no reason to expect similar effects to arise in a
>system that can be programmed to conceptualize itself as a design component
>as easily as an agent or an individual, and that more likely would simply
>not have any moral "self" at all. I mean, something resembling an
>"I" will probably evolve whether we design it or not, but that doesn't
>imply that the "I" gets tangled up in the goal system. Why would it?
i fail to see how it could not get tangled up... even in a case like "in
order to maximize greenness, the resources over there should be used in this
manner" (which has no self-subject implied), a distinction must be made
between resources more directly controlled (what i would call "my stuff")
and resources more indirectly controlled (what i would call "other stuff"),
etc... and as soon as that distinction is made, degrees of
ownership/beingness/whatever are implied, and from there they promptly get
mixed up in the goal system...
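to make that concrete, here's a toy sketch in python (entirely my own
invention, not anybody's proposed architecture) of a "selfless"
greenness-maximizer that still ends up drawing a my-stuff/other-stuff line:

# toy sketch (my own invention): a "selfless" greenness-maximizer that
# still has to split the world into stuff it steers and stuff it doesn't.

controlled = {"nanofab-1": 10.0, "solar-array": 5.0}    # "my stuff"
uncontrolled = {"asteroid-belt": 1000.0}                # "other stuff"

def expected_greenness(plan):
    # directly controlled resources convert to greenness reliably;
    # indirectly controlled ones only at some acquisition discount
    direct = sum(controlled[r] for r in plan if r in controlled)
    indirect = sum(0.3 * uncontrolled[r] for r in plan if r in uncontrolled)
    return direct + indirect

print(expected_greenness(["nanofab-1", "asteroid-belt"]))  # 10.0 + 300.0 = 310.0

that 0.3 discount on "other stuff" *is* the self-boundary sneaking in, even
though no "I" appears anywhere in the goal...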
[mad snippage...]
>> Well, see above. This would only make sense in an *acutely*
>> desperate situation. By all means, go ahead with your research,
>> but I'd wait with the final steps until we know for sure
>> that uploading/space escape isn't going to make it. In that
>> case I'd certainly support a (temporary!) Sysop arrangement.
>
>I think that we have enough concrete knowledge of the social situation,
>and of the pace of technological development, to say that a Sysop
>arrangement will almost certainly become necessary.
necessary? in the sense that such an arrangement would increase my odds of
survival, etc? i doubt it, if only because the odds against my survival
would have to be dire indeed (understatement) to justify the massive amount
of work required to make a sysop; effort that could instead be invested
towards, say, getting off this planet, which would be a better stopgap
anyway.
unless, of course, you come up with a well-thought-out essay on the order
of "coding a transhuman ai" discussing the creation of a specialized sysop
ai. if you see a way to make it more doable than uploading, space travel,
"normal" transcendent ai, and nanowar, more power to ya... but that sounds
way tougher than "coding" was. maybe i'll trend towards advocating sysop
creation when i think it's doable in the relevant time frame.
[more snippage]
>When I say that the increment of utility is low, what I mean is that you
>and your cohorts will inevitably decide to execute a Sysop-like
>arrangement in any case.
i trend towards advocating a very dumb sysop, if it can be called that...
a "simple" upload manager...
>You and a thousand other Mind-wannabes wish to
>ensure your safety and survival. One course of action is to upload,
>grow on independent hardware, and then fight it out in space.
or just run the fuck away, and hopefully not fight it out for a very, very
long time, if ever. dibs on alpha centauri... ;)
>If defense turns out to have an absolute, laws-of-physics advantage over
>offense, then you'll all be safe. I think this is extraordinarily
>unlikely to be the case, given the historical trend.
historical trend is better than nothing, of course, but i dunno if
historical trend really cuts it in this case...
>If offense has an advantage over defense, you'll all fight it out until
>only one Mind remains with a monopoly on available resources.
or we will all go Elsewhere... or we will all stalemate... or we will all
borgify... or we will all decide to commit suicide... or (insert
possibilities here that only a Power could think of).
>However, is the utility
>of having the whole Solar System to yourself really a thousand times the
>utility, the "fun", of having a thousandth of the available resources?
>No. You cannot have a thousand times as much fun with a thousand times
>as much mass.
i don't see how we can know that. what if, just for example, we need the
entire solar system to make a very special kind of black hole? geez...
[insert rant about the incomprehensibility of Powers here].
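for what it's worth, here's a toy way (python; both utility functions below
are my own assumptions, not anybody's actual numbers) to see why we might be
talking past each other here:

# purely illustrative -- neither utility function is anybody's real model.
import math

def diminishing_utility(mass):
    # "fun" grows much more slowly than mass (roughly the picture above)
    return math.log(1.0 + mass)

def threshold_utility(mass, needed=1000.0):
    # the "need the whole solar system for the special black hole" picture:
    # worthless below the threshold, jackpot at or above it
    return 1.0 if mass >= needed else 0.0

share, whole = 1.0, 1000.0   # a thousandth of the resources vs. all of them

print(diminishing_utility(whole) / diminishing_utility(share))  # ~10x, not 1000x
print(threshold_utility(share), threshold_utility(whole))       # 0.0 vs 1.0

under diminishing returns, grabbing everything buys maybe an order of
magnitude more "fun" than a thousandth of it; under an all-or-nothing goal,
a thousandth is worth exactly nothing. which kind of utility function a
Power actually runs on is exactly the thing i don't think we can guess at.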
>You need a peace treaty. You need a system, a process, which ensures
>your safety. Humans (and the then-hypothetical human-derived Minds) are
>not knowably transparent or trustworthy, and your safety cannot be
>trusted to either a human judge or a process composed of humans. The
>clever thing to do would be to create a Sysop which ensures that the
>thousand uploadees do not harm each other, which divides resources
>equally and executes other commonsense rules. Offense may win over
>defense in physical reality, but not in software. But now you're just
>converging straight back to the same method I proposed...
mutually assured destruction seems more clever than a sysop.
>The other half of the "low utility" part is philosophical; if there are
>objective goals, you'll converge to them too, thus accomplishing exactly
>the same thing as if some other Mind converged to those goals. Whether
>or not the Mind happens to be "you" is an arbitrary prejudice; if the
>Otterborn Mind is bit-by-bit indistinguishable from an Eliezerborn or
>AIborn Mind, but you take an action based on the distinction which
>decreases your over-all-branches probability of genuine personal
>survival, it's a stupid prejudice.
what if the objective goal is to attain as much "individuality" (whatever
that turns out to be) as possible... granted, borgification might happen.
or not. i just wanna keep as many options open as possible.
>> That's a tough one. Is "survival" an objective (super)goal? One
>> must be alive to have (other) goals, that's for sure, but this
>> makes it a super-subgoal rather than a supergoal. Survival
>> for its own sake is rather pointless. In the end it still comes
>> down to arbitrary, subjective choices IMO.
>
>Precisely; and, in this event, it's possible to construct a Living Pact
>which runs on available hardware and gives you what you want at no
>threat to anyone else, thus maximizing the social and technical
>plausibility of the outcome.
what if i want to *be* said Pact?
sayke, v2.3.05