Re: Sysops and enlightened despots

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Aug 03 2001 - 09:37:31 MDT


On Fri, Aug 03, 2001 at 03:03:44AM -0400, Brian Atkins wrote:
> Reason wrote:
>
> I have yet to see a better solution to the issue. At some point the matter
> (as in atoms) must fall under someone's control, and personally I don't
> relish the idea of having to constantly protect myself from everyone else
> who can't be trusted with nanotech and AI. All it takes is one Blight to
> wipe us out. That kind of threat does not go away as humans progress to
> transhumanity, rather it increases in likelihood. What is the stable state
> if not Sysop or total death? There may be some other possibilities, can
> you name some?

Stable states are dead states. I would say another possibility would be an
eternally growing self-organised critical state - sure, disasters happen,
but countermeasures also emerge. The whole is constantly evolving and
changing.
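
For a concrete picture of what I mean, the standard toy model of a
self-organised critical state is the Bak-Tang-Wiesenfeld sandpile: grains
are dropped at random, local "avalanches" of every size keep happening, yet
the pile keeps returning to its critical state without any central overseer.
A minimal sketch in Python (grid size and grain count are arbitrary):

    import random

    def sandpile(n=20, grains=5000):
        """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time and
        relax; avalanche sizes span many scales at criticality."""
        grid = [[0] * n for _ in range(n)]
        sizes = []
        for _ in range(grains):
            # drop one grain at a random site
            grid[random.randrange(n)][random.randrange(n)] += 1
            size = 0
            unstable = True
            while unstable:
                unstable = False
                for i in range(n):
                    for j in range(n):
                        if grid[i][j] >= 4:
                            # topple: shed four grains to the neighbours;
                            # grains falling off the edge are lost
                            grid[i][j] -= 4
                            size += 1
                            unstable = True
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                                if 0 <= i + di < n and 0 <= j + dj < n:
                                    grid[i + di][j + dj] += 1
            sizes.append(size)
        return sizes

    sizes = sandpile()
    print("largest avalanche:", max(sizes), "mean:", sum(sizes) / len(sizes))

Most avalanches are tiny, a few are enormous, and there is no fixed upper
bound on the damage - but the system as a whole neither freezes nor
collapses. That is the kind of evolving, messy stability I have in mind.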

Having (post)human development constrained to a small part of the available
technology/culture/whatever space in order to ensure safety is going to run
into a Gödel-like trap. There are likely undecidable threats out there,
things that cannot be determined to be dangerous or not using any finite
computational capability. Hence the only way of ensuring security is to
limit ourselves to a finite space - which goes counter to a lot of the core
transhumanist ideas. Or the sysop would have to allow undecidable risks and
similar hard-to-detect threats. One category of threats to worry about is
of course the threats the sysop itself would run into while looking for
threats - they could wipe us out exactly because (post)humanity had not been
allowed the necessary dispersal and freedom that might otherwise have
saved at least some of us.
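
To make the Gödel-like trap concrete: suppose the sysop had a perfect threat
detector, a procedure that always halts and tells you whether an arbitrary
piece of code (or a design compiled down to code) will ever do something
destructive. Then you could solve the halting problem with it, which Turing
showed is impossible. A rough sketch (is_dangerous, main and destroy are
purely illustrative names, not any real interface):

    def is_dangerous(program_source):
        """Hypothetical perfect threat detector: always halts and returns
        True iff running program_source would ever call destroy().
        The reduction below shows no such detector can exist."""
        raise NotImplementedError("no such detector can exist")

    def halts(program_source, input_data):
        """Reduce the halting problem to threat detection: wrap the program
        so that it 'does something destructive' exactly when it halts."""
        wrapped = (
            program_source + "\n"          # assume it defines main(x)
            f"main({input_data!r})\n"      # run it on the given input
            "destroy()\n"                  # reached only if main() halted
        )
        return is_dangerous(wrapped)

    # Since halts() cannot be computed in general, the perfect
    # is_dangerous() the sysop would need cannot be computed either.

So the sysop either accepts missed threats and false alarms, or restricts
everything to designs simple enough to verify - which is exactly the
finite-space limitation above.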

This is essentially the same problem as any enlightened despot scheme (and
there is of course the huge range of ethical problems with such schemes
too), put in a fresh sf setting. Enlightened despots make bad rulers
because they cannot exist: they need accurate information about the
preferences of everybody, which is not possible for any human ruler. The
version 2.0 scenario, which assumes an omniscient AI, runs into the same
problem anyway: it would need to handle an amount of information of the same order
of magnitude as the information processing in the entire society. Hence it
would itself be a sizeable fraction of society information-wise (and itself
a source of plenty of input in need of analysis). Given such technology, as
soon as any other system in society becomes more complex the ruler AI would
have to become more complex to keep ahead. Again the outcome is that either
growth must be limited or the society is going to end up embedded within an
ever more complex system that spends most of its resources monitoring
itself.
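
A back-of-the-envelope version of that scaling problem (my own made-up toy
model, only the shape of the argument matters): assume society processes S
units of information and the overseer must process a fraction c of
everything that happens, including its own monitoring activity. Then its
capacity M has to satisfy M = c*(S + M), i.e. M = c*S/(1 - c):

    # Toy numbers for the overseer-scaling argument above.
    S = 1.0  # society's information processing, in arbitrary units
    for c in (0.1, 0.5, 0.9, 0.99):
        M = c * S / (1 - c)
        print(f"c = {c}: overseer needs {M / S:.1f}x society's capacity")

Push c towards 1 - which is what "keeping ahead of every other system"
amounts to - and the overseer dwarfs the society it is supposed to protect.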

The idea that we need someone to protect ourselves from ourselves in this
way really hinges on the idea that certain technologies are instantly and
thoroughly devastating, and assumes that the only way they can be handled is
if as few beings as possible get their manipulators on them. Both of these
assumptions are debatable, and I think it is dangerous to blithely say that
the sysop scenario is the only realistic alternative to global death. It
forecloses further analysis of the assumptions and alternatives by
suggesting that the question is really settled and there is only one way.
It also paints a simplistic picture of choices that could easily be used to
attack transhumanism, either as strongly hegemonic ("They want to build a
computer to rule the world! Gosh, and here I thought the claims that they
actually think like movie mad scientists were just insults.") or as
totally out of touch with reality ("Why build a super-AI (as if such stuff
could exist in the first place) when we can just follow Bill Joy and
relinquish dangerous technology?!").

A lot of this was hashed through in the article about nanarchy in Extropy,
I think. Nothing new under the Dyson.

Existential risks are a problem, but I'm worried that people are being overly
simplistic when trying to solve them. My own sketch above of a self-organised
critical state is complex, messy and hard to analyse in detail because it
is evolving. That is a memetic handicap, since solutions that are easy to
state and grasp sound so much better - relinquishment of technology or
sysops fit in wonderfully with the memetic receptors people have. But
complex problems seldom have simple neat solutions. Bill Joy is wrong
because his solution cannot practically work; the sysop is simple to state
but hides all the enormous complexity inside the system.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y


