Re: correction Fermi 2

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Nov 16 2001 - 07:50:33 MST


"Robert J. Bradbury" wrote:
>
> It is far more likely that colonization is frowned upon
> because colonies are likely to be competitors than
> colonization is prohibitively expensive.

That is a very interesting point. I have to say that pretty much
everything I currently know says it's probably an anthropomorphism derived
from battling human tribes. But if you could really demonstrate that any
sufficiently large internal time-lag will cause *any* mind-in-general to
fragment to the point where its parts take hostile action against each
other, or otherwise take actions that the current central part regards as
a net negative, then that would present a strong reason for any
superintelligence-in-general to collapse into a computational space of
maximal density rather than attempt to colonize the universe.

Even so, I don't see why minds would fragment this way, or why temporal
distance would increase the degree of fragmentation; and even if minor
divergences in goals did somehow build up, I would still expect the total
outcome to be a net positive, if not an absolute positive, from the
perspective of a central mind considering fragmentation, or of one part
regarding another part... then again, maybe the incremental utility of
fragmentation is less than its incremental disutility, especially if the
sole point of colonization is to rescue a few pre-Singularity
civilizations here and there.

Doesn't sound right. But it's new to me and it's worth thinking through
in more detail. I guess the hypothesis would formally be as follows:

Hypothesis: The known negative utility of creating a temporally distant
submind is large enough to exceed the probabilistic possible utility of
sending out a colony, and colonization beyond some tight physical
boundary necessarily involves that much temporal distance. This rule
holds for all, or virtually all, superintelligences in general.
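
In rough expected-utility terms: write D_frag for the known disutility
of spawning a temporally distant submind, p_civ for the probability that
a given colonized volume holds a rescuable pre-Singularity civilization,
and U_rescue for the utility of rescuing it (placeholder symbols of mine,
nothing Robert proposed). The claim is then that, for every colony
beyond that boundary,

    D_frag > p_civ * U_rescue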

Supporting subhypotheses would be these:

1) Matter and energy are not conserved resources. From the perspective
of a Friendly mind, the sole utility of exploring the Universe is rescuing
pre-Singularity civilizations. From the perspective of a nonFriendly
mind, there is no point in exploring the Universe at all.

2) Lightspeed limitations hold. You can't even send an FTL wormhole
terminus with a colony ship that travels at sublightspeed. Either there
are no General Relativity workarounds or such workarounds do not operate
over transgalactic distances.

3) Given sufficient temporal separation from a child, a mind-in-general
will expect a degree of fragmentation sufficient to establish that child
as being slightly suboptimal in some way. The child mind may still be a
cooperator rather than a defector, but it will be a competitor, and
"nonexistent" is preferred to "slightly suboptimal". For example, the
child mind might demand a share of some resource which is unique to a
given universe.

4) The probability of other civilizations undergoing their own
Singularities and demanding their own shares of a unique resource, or of
such civilizations otherwise having still more divergent goals, is small
enough that the known negative utility of creating a colonizing entity
outweighs the probabilistic positive utility of finding a potential
competing civilization before its Singularity. This may not make much
sense for the universe as a whole, but it might if the problem has to be
considered one star system at a time (see the sketch after this list).

5) There is no way to send out non-independently-thinking probes capable
of incorporating possible competing civilizations. This is certainly
reasonable from our perspective, but it must also hold true of all
possible minds that can be built by a superintelligence.

6) Competing fragmented entities will tend to fragment further, creating
a potential exponential problem if even a single fragmentation occurs.
Again, this must hold of minds-in-general.
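
Just to make the one-system-at-a-time version of (4) concrete, here is a
toy decision rule in Python; the function name and the numbers are
placeholders of mine, not anything from the argument above:

    # Toy sketch: colonize a system only if the expected rescue payoff
    # beats the known cost of spawning a distant, slightly-divergent
    # submind to get there.
    def worth_colonizing(p_civ, u_rescue, d_fragment):
        return p_civ * u_rescue - d_fragment > 0

    # Rescues rare, fragmentation costly: stay home (prints False).
    print(worth_colonizing(p_civ=1e-9, u_rescue=1e6, d_fragment=1.0))
    # Rescuable civilizations common enough: the rule flips (prints True).
    print(worth_colonizing(p_civ=1e-2, u_rescue=1e6, d_fragment=1.0))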

Sounds like a pretty hostile picture of the universe...

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


