Re: correction Fermi 2

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sat Nov 17 2001 - 15:12:13 MST


Eliezer said:

> I don't see why minds would fragment this way, or why temporal
> distance would increase the degree of fragmentation

I would expect the feasibility of finding "optimal" solutions to
problems depends on the degree of autonomy granted to agents.
If the Mars Rover had to check with Houston every time it encounters
a rock and ask "Do I go left or right here?" then it's going to be
a pretty slow mission. For example, the things Eliezer is working
on now he probably could not have worked on as efficiently 1 or 2
years ago, due to possible environmental or resource constraints.

> maybe the incremental utility of fragmentation is less than the
> incremental disutility, especially if the sole point of colonization
> is to rescue a few pre-Singularity civilizations here and there.

The question is how do you define "utility" for super-intelligences?
Say our solar system pulls up right next to an adjacent untransformed
solar system (or a brown dwarf). You send out a swarm of constructor
bots to do some rapid foundation-building and clone your
super-intelligence. At that point you have two copies of yourself,
which presumably have
the same goals, moral perspective, etc. But as soon as the solar
systems begin to move apart you presumably begin to diverge. Your
next encounter with your former self may not be for a few billion
years -- at that point it seems unlikely you will recognize your
former self. It is doubtful that you can keep your "selves"
in sync with each other, because as the distance between them
increases it becomes increasingly expensive to transmit even a
small fraction of the new data each clone is constantly creating.
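
Just to make the scaling concrete (the rates below are made-up
assumptions for illustration, not estimates of anything): if each
copy generates new state at some fixed rate while the usable link
capacity between them falls off with the square of the separation,
the fraction you can keep in sync collapses quickly. A toy Python
sketch:

    # Toy sync-cost sketch; all rates are invented for illustration.
    def sync_fraction(separation_ly, data_rate_bps=1e30,
                      link_bps_at_1ly=1e24):
        """Fraction of newly created data that can be shipped to the
        twin, assuming a diffraction-limited beam at fixed power, so
        capacity falls off as 1/distance**2."""
        link_bps = link_bps_at_1ly / separation_ly**2
        return min(1.0, link_bps / data_rate_bps)

    for d in (1, 10, 1_000, 100_000):   # light-years apart
        print(f"{d:>7} ly : can sync {sync_fraction(d):.1e} of new data")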

The only "utility" seems to be that in the process of cloning itself,
the SI can empty half of its presumably filled-up memory. I doubt
that it would want to unload the valuable memories, so presumably it
unloads the less valuable ones. That means the clone starts
out rather "handicapped" and potentially has cause to hold a
grudge against you. Or you could split everything 50:50, but then
it would seem that you are rolling the dice on creating an entity
that could become more powerful than yourself.

None of this matters at *this* point in the development of the
Universe, since there is a lot of underutilized material to go
around. But we *know* that at some point push is going to come to
shove. Whether SIs develop a moral system that says you never
cannibalize your neighbor, perhaps instead choosing to run
ever more slowly, remains to be seen.

I think an argument for colonization to rescue pre-Singularity
civilizations is extremely anthropocentric. I think the Zen
of SIs argues that observing the "process" is what is interesting,
not selecting the winners and losers.

> The child mind may still be a cooperator rather than a defector,
> but it will be a competitor, and "nonexistent" is preferred to
> "slightly suboptimal". For example, the child mind might demand
> a share of some resource which is unique to a given universe.

A clone or a child SI is likely to start out as a cooperator
and seems likely to stay that way assuming the Universe in general
has resources allowing its continued existence. Game-theoretic
assumptions of "trustability" would seem to make sense, and the
risk of berserker bots would seem to encourage them. However,
encounters between SIs that allow effective communication of any
significant fraction of their information content will be rare,
so divergence between "siblings" or between parent and child
seems likely.
The only exceptions to this might be families or tribes of SIs that
have gone to the trouble of arranging their orbits in the galaxy so
they can fly as a closely packed family or tribe.
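
To put a toy model behind that intuition (this is just the standard
repeated prisoner's dilemma argument, and the payoff numbers are my
own illustrative assumptions): cooperation under a grim-trigger
style strategy stays rational only while the chance of ever meeting
again is high enough, which is exactly what flying as a packed
tribe buys you.

    # Toy repeated prisoner's dilemma; T > R > P > S are illustrative.
    T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker

    def cooperation_stable(w):
        """Grim-trigger cooperation is an equilibrium iff the value of
        continued cooperation beats a one-shot defection:
        w >= (T - R) / (T - P), where w is the probability of ever
        interacting again."""
        return w >= (T - R) / (T - P)

    for w in (0.9, 0.5, 0.1, 1e-6):   # packed tribe ... isolated drifter
        print(f"re-encounter probability {w:g}: "
              f"cooperation stable -> {cooperation_stable(w)}")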

> The probability of other civilizations undergoing their own
> Singularities and demanding their own shares of a unique resource, or of
> such civilizations otherwise having still more divergent goals, is
> sufficiently small that the known creation of a colonizing entity is a
> greater negative utility than the probabilistic positive utility of
> finding a potential competing civilization before its Singularity

There would seem to be only two resources -- matter (which SIs can
transmute) and energy (of which there is plenty in interstellar
hydrogen clouds). The goals may diverge (in terms of what SIs
think about, or what part of the phase space they choose to
explore), but it seems questionable that they would evolve into
differences as great as those between the Horta, Klingons, Vulcans
and Shape-shifters. Those all seem like simple explorations of the
"animal" phase space, nowhere close to the actual physical limits.

If one located potentially competing civilizations, it seems likely
that observing them up until the point of the Singularity might
provide some useful data. At the current stage of galactic
development, allowing them to go through the Singularity is
relatively risk-free as well. Whether they do so seems to depend on
the long-term utility of having had more civilizations explore
unique vectors through the Singularity into the post-Singularity
phase space, relative to the far-distant problem of having more
competitors when resources become scarce.

> There is no way to send out non-independently-thinking probes capable
> of incorporating possible competing civilizations. This is certainly
> reasonable from our perspective, but it must also hold true of all
> possible minds that can be built by a superintelligence.

That would seem to depend on the potentially competing
civilizations. It suggests that as soon as we start to ramp up
toward the Singularity, the Galactic Club probe will land in the
U.N. courtyard, hand over the Galactic Club Rules and say "Sign
Here" (or ELSE!). I suspect we would sign, but if we were Klingons
we might make a different choice.

> Competing fragmented entities will tend to fragment further, creating
> a potential exponential problem if even a single fragmentation occurs.

Looks that way to me. Of course it doesn't grow very fast because
it depends on encounters with large matter concentrations that can
support cloning or forking. But SIs are thinking on
billion-to-trillion-year timescales, so they may have more reason
for concern than we would.
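
A quick sketch of why the growth is slow in time yet still
exponential in encounters (the encounter rate here is an invented
assumption, picked only to show the shape of the curve):

    # Toy fragmentation model: every fragment forks once per encounter
    # with a large untransformed matter concentration.
    ENCOUNTERS_PER_FRAGMENT_PER_GYR = 0.01   # assumed: one per 100 Gyr

    def expected_fragments(t_gyr, start=1):
        expected_encounters = ENCOUNTERS_PER_FRAGMENT_PER_GYR * t_gyr
        return start * 2 ** expected_encounters   # doubling per encounter

    for t in (1, 100, 1_000, 10_000):   # billions of years
        print(f"after {t:>6} Gyr: ~{expected_fragments(t):.3g} fragments")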

I think the question may turn on the utility of having
civilizations take unique paths through the Singularity, balanced
against the risk that they become rogue SIs post-Singularity. It
may be that the rate of civilizations going through the Singularity
is so low that we are still at the stage of galactic development
where civilizations have to be observed, to build up the database
of what the markers are for the development of "good" vs. "evil"
SIs. Then in the future the SIs will know in advance which
civilizations to "rollback" to stages where productive development
seems likely. You can value sentience from a moral perspective, but
the potential destruction that could be caused by a rogue SI would
seem to weigh in as a much greater concern than a few billion human
lives (at least from an extropic, utilitarian, SI point of view).

Whether or not SIs clone or fork may in turn depend on whether all
of the post-Singularity SI "thought" we anticipate can come up with
good solutions to the resource shortage one expects in the
far-distant future.

Robert


