Re: One Unity, Different Ideologies, all in the same universe

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Dec 26 2001 - 17:06:32 MST


Anders Sandberg wrote:
>
> On Wed, Dec 26, 2001 at 03:28:54PM -0500, Eliezer S. Yudkowsky wrote:
> >
> > AI slavery is less expensive than biological slavery, but if you insist
> > that there is any difference whatever between the two, it's easy enough to
> > imagine Osama bin Laden biologically cloning seventy-two helpless
> > virgins. From my perspective, these are people too and they have just as
> > much claim on my sympathy as you or anyone else. If it's worth saving the
> > world, it's worth saving them too.
>
> Sure. Do you think Osamaland would be accepted by any reasonable federative
> constitution?

I think Osama will not *apply to join* your federative constitution. I
think he'll set up his own asteroid cluster (or whatever level of
technology it is you're postulating) and do whatever the hell he pleases.
I want to know what your Federation is going to do about it.

Until you answer this question, you are simply postulating a singleton
scenario without the singleton; i.e., everyone in the universe has
mysteriously agreed to adopt the Anders Sandberg Constitution, whose
bylaws look charmingly like Friendliness's nonviolation of volition.

> My
> point is that we are talking about getting real people to set up real
> political systems in the real world

*cough*hopeless*cough*

Small political changes are one thing. Total rewriting of the worldwide
political system, through human social channels, pre-Singularity, seems
rather unlikely.

> > 1) This sounds to me like a set of heuristics unadapted to dealing with
> > existential risks (and not just the Bangs, either). Some errors are
> > nonrecoverable. If Robin Hanson's cosmic-commons colonization race turns
> > out to be a Whimper, then we had better not get started down that road,
> > because once begun it won't stop.
>
> Which existential risks are relevant? When planning to act in any way you
> have to make estimates of risks. If the risk is too great, you become
> careful and may even avoid certain actions if the risk is unacceptable. In
> many cases both the probability and the severity of the risk are unknown,
> and have to be gradually refined given what we learn. Basing your actions
> on your prior estimate and then never revising it would be irrational. So
> what is needed is systems that allow us to learn and act according to what
> we have learned.

Um... no, what's needed is a system that gets it right the first time.
That's kind of the distinguishing quality of existential risks. Even if
you drop the stakes a bit, down to the level of, say, nuclear war, you
really don't want to plan on building a system that eventually learns to
manage global thermonuclear wars after four or five tries. With
existential risks, of course, even a single failure is unacceptable.
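
(A toy sketch of the arithmetic, with made-up numbers of my own rather
than anyone's actual risk estimates: if each "learn from failure"
iteration carries some fixed chance of unrecoverable catastrophe, the
odds of surviving long enough to learn fall off geometrically, and with
an existential risk the first failure ends the experiment outright.)

    # Toy illustration in Python; the 10% per-trial figure is an
    # arbitrary placeholder, not an estimate of anything real.
    def survival_probability(p_fatal_per_trial, trials):
        # Chance of getting through all the trials with no fatal failure.
        return (1.0 - p_fatal_per_trial) ** trials

    print(survival_probability(0.10, 5))  # ~0.59, roughly a coin flip
    # With a genuinely existential risk, one trial is all you ever get.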

> I think there is a certain risk involved with the concept of existential
> risks, actually. Given the common misinterpretation of the precautionary
> principle as "do not do anything that has not been proven safe", even the
> idea of existential risks provides an excellent and rhetorically powerful
> argument for stasis. Leon Kass is essentially using this in his
> anti-posthuman campaign: since there may be existential risks involved with
> posthumanity, *it must be prevented*.

The flaw in this idea is simply that minimizing global existential risks
sometimes involves a necessary existential risk, such as the deliberate
development of superintelligence. Not that existential risks are somehow
okay. Leon Kass is right to be nervous - our situation *is* every bit as
precarious as "one single failure leads to total destruction" suggests.
It's just that his solution is absolutely unworkable and suicidal, that's
all.
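
(Again a hedged toy comparison, with placeholder probabilities that are
purely illustrative and not numbers from this thread: accepting one risk
can still be the move that minimizes total existential risk, if every
alternative path carries a larger accumulated risk.)

    # Toy comparison in Python; both probabilities are invented placeholders.
    p_doom_if_we_build_superintelligence = 0.05  # risk of the deliberate project itself
    p_doom_if_we_hold_back = 0.30                # accumulated risk from every other source
    options = {"build it": p_doom_if_we_build_superintelligence,
               "hold back": p_doom_if_we_hold_back}
    print(min(options, key=options.get))  # "build it": the necessary risk is the smaller one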

> > 2) The cost in sentient suffering in a single non-federation community,
> > under the framework you present, could enormously exceed the sum of all
> > sentient suffering in history up until this point. This is not a trivial
> > error.
>
> No, but that is not something the federation was supposed to solve either.

That's going to make the Sandberg Federation rather unattractive if
there's a strategy that does solve that problem... even if it involves
(gasp!) a global optimum.

> The fact that there is awful suffering and tyranny in some countries
> doesn't invalidate the political system of the US. The federation is not
> based on a utilitarian ethical perspective where the goal is to maximize
> global happiness.

Uh... what the heck is it based on then?

> I distrust the search for global optima and solutions that solve every
> problem.

I think you must be confusing "global optimum" with "perfection".
Mistrusting the search for perfection is a common human heuristic, and
often quite a correct one in an imperfect world, where predictions of
perfection are more often produced by wishful thinking than by an
unbiased model of a real opportunity for perfection. Of course,
exporting this heuristic into the superintelligent spaces is more often
the product of anthropomorphism than of principled analysis.

> The world is complex, changing and filled with adaptation, making
> any such absolutist solution futile, or worse, limiting. I prefer to view
> every proposed solution as partial and under revision as we learn more.

Hm. Well, your Sandberg Federation must be an absolutist solution too,
since it doesn't seem to allow for communities disobeying the laws of
physics. And if everyone is going to obey the laws of physics anyway, why
not go the whole hog and use intelligent substrate to add nonviolation of
volition to the rule set? You can get just as much variation running on
top of intelligent substrate as running on top of the laws of physics, and
it's a lot safer for everyone.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


