Re: The Politics of Dancing

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Dec 06 2001 - 07:03:13 MST


Damien Broderick wrote:
>
> At 07:03 PM 12/5/01 -0800, Robert wrote:
>
> >I'm not an individual who strongly favors "court politics", so I do
> >not consider myself to be a "singularitarian".
>
> Apologies for using the wrong word there: I know that Eliezer and his
> colleagues have appropriated this term.

For the record, it was appropriated with the permission of Mark Plus, the
original inventor. And it was redefined simply to mean "Singularity
activist" rather than "Singularity expecter".
 
> On the matter of whether Sysops and other less congenial forms of what I've
> lately suggested calling Custodians or Stewards would have the odious
> characteristics you ascribe to them, Eliezer repudiated this suggestion on
> SL4. I hope he might cc. that post to the extropian list so all may munch
> on its nutrients.

Okey dokey.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

attached mail follows:


Damien Broderick wrote:
>
> I wasn't *really* suggesting `Prefect', you understand (although it amuses
> me that it's an anagram of `Perfect'); I was gesturing toward the nature of
> the beast under discussion. `Monitor' is another possibility; as are
> `Custodian', `Protector' and `Steward'. The last of these is perhaps the
> most general and least offensive... although the idea itself remains rather
> offensive however it's parsed.

Doesn't that last remark say it all...

If you think the idea has offensive consequences, you're going to pick a
term that has connotations that remind you of those offensive
consequences. This, from our perspective, short-circuits the process of
rational argument, which should start with a morally neutral term
describing what the hypothesis *is*.

A Sysop is not a Prefect, Monitor, Custodian, or Steward. At most it
might be a Protector, and the reason this term is salvageable is that a
"protector" does not specify how something is being protected, or for what
motive; the other terms all have specific connotations of political and
moral authority in a human social context.

The Sysop is not a human mind. If it were, most of this would be nonsense
and the rest would be actively dangerous. This is something that becomes
possible only when you step outside the realm of evolved minds and start
considering what a mind-in-general can be asked to do. If you import
terms that have specific connotations and meanings in the context of human
society, you are anthropomorphizing the whole situation; you have sucked
all the interestingly alien aspects out of it.

In this sense, Gordon Worley's Unix Scenario, in which "root" is not a
*conscious* process, is psychologically superior to the Sysop Scenario; it
is less likely to be confused with human ideas of gods, fathers, and other
extrema of the "tribal chief" concept. Unfortunately I also think the
Unix Scenario is less plausible, but that's a separate issue.

Humans have a phobia of minds, which unfortunately extends from human
minds (where it is justified) to minds in general (since no other mind
types were encountered in the ancestral environment). Someone looking
over Gordon Worley's Unix Scenario says "Hm, underlying reality works
according to certain definite physical rules; there are no minds here; I'm
probably safe." Someone in a Sysop Scenario is just as likely to be safe,
but the human instincts look over the Sysop Scenario and say: "There is a
mind here; that mind is likely to act against me," or even worse, "There
is a mind here; this mind is an extremum of the tribal-chief concept;
therefore, this mind will boss me around." The motivations of a nonhuman
superintelligence that does not *want* to boss you around can be just as
solid a safeguard as an absolute physical impossibility of interference.
The fact that your sexual habits are of absolutely no concern to the
singleton substrate means that your midnight assignations might as well be
outside the light cone of the solar system; the only difference is that
nobody else can interfere with you either.

Outside the human realm, dealing with real extremes of cognition instead
of imagined extremes of social categories, superintelligent motivations
can be just as solid and impartial as physical law. Maybe, to reflect
this, we should skip both the Sysop Scenario and Unix Reality and go straight
to discussing Michael Anissimov's ontotechnology scenarios. For some
reason there's a rule that says you can't hurt someone without their
consent. Is it because the Sysop predicts a violation of volition?
Because the low-level rules of Unix Reality don't permit the physical
interaction? Because, back in the dawn of the Singularity, the first
Friendly SI made some quiet adjustments to the laws of physics? Because
of something entirely unimaginable? What difference does it really make,
except to human psychology?

If it's theoretically possible for transhumans to retain motivations that
would make them hostile toward other transhumans, then a possible problem
of transhuman war, or even transhuman existential catastrophe, exists; but
there is at least one comprehensible proposed solution to this problem,
and it is therefore disingenuous to present the problem as unsolvable.
Maybe totally unrestricted technology for everyone in the universe,
including humans who've refused intelligence enhancement and still have
their original emotional architectures, won't threaten the welfare of one
single sentient being, for reasons we can't now understand. But if not,
we know what to do about it. That's all.

The utility of discussing the Sysop Scenario is this: that we retain the
ability to say "There are no known unsolvable problems between us and the
Singularity". Nothing more. It's a *prediction*, not a *decision*;
whether Unix/Sysop/whatever is actually needed would be up to the first
Friendly SI.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


