On Thu, 15 Mar 2001, Nick Bostrom, commenting on my comments wrote:
> is the only way of preventing a black goo disaster.
Actually, I think the standard term is "gray goo"; see Engines of
Creation or Robert Freitas's ecophagy paper.
> Another is that we might understand that this degree of coordination
> is necessary to avoid Robin's Burning of the Cosmic Commons scenario.
But from my perspective (and I know we haven't resolved this debate),
there is no point to interstellar colonization because you get no
benefit from it.
> A third reason is that if there is a singularity then the transcending
> power might well get enough power to become a singleton.
Yes, I think this is a variant of the runaway first-upload scenario
(which may be part of Robin's "If Uploads Come First"; I can't remember).
> This requires a rather strong convergence-hypothesis
> ("all advanced civilization evolve in the same direction"), but it is real
> possibility in my opinion, and indeed is the scenario that we should hope
> is correct.
I'm pretty sure that we all do evolve in the same direction, one that
gets very close to the limits of known physical law. However,
because of the huge variation in starting conditions (number of planets,
element abundances, star size, history of encounters as one's
system travels through the galaxy, disruptions caused by galactic
collisions, etc.), we may all end up with somewhat different final
configurations (for example Dyson Nets, JBrain'ed systems,
Matrioshka Brains, Anders's more fanciful end-points that require
sub-atomic engineering, etc.). I think convergent evolution is
going to drive entities into specific architectures that are
optimal for specific purposes. But I don't see anything that
rules out getting one type of civilization/architecture
in which ancestor simulations are allowed and another type
in which they are not.
Now, there may be a general rule that says that running sub-SI
simulations (at least at our level) produces no net gain in knowledge
and is a waste of computronium and energy. Or there may
be a general rule that says that once you have fully optimized
all the local matter and energy all you can do in terms of
allowing evolution to continue is run sub-SI simulations in
which virtual evolution occurs.
So either of those seems possible to me, but I don't see how to
decide between them.
> It would be a fundamental mistake to think of simulations as not fully
But they aren't *real* to me! Humanity has a huge range of
standards regarding what can be morally justified in the
treatment of other sub-"conscious" entities. Look at eating
beef, eating dog meat, exterminating cockroaches, etc.
To an SI we are *way* below the cockroach level in terms of
relative complexity.
> If the beings in the simulations are conscious then their well-being is as
> ethically important as that of those who are implemented directly in
> biological brains in the basement universe.
Hmmmm, this seems to be the position that "conscious" = valuable and
"non-conscious" = not valuable (i.e., a binary state). I'm fairly sure that
many neuroscientists would argue there is a consciousness scale
running from the lowest to the highest animals. (This goes back to which
animals we can observe recognizing themselves in a mirror.)
There is probably also a "pain" scale as well. Say the SI evolved
from a culture that valued information content over self-awareness.
In that situation, you would eliminate the sub-SI simulations as soon as
they have reached the end of their useful lifetime, and their resources
could be reallocated to something that generates a greater amount of information.
> You wouldn't be allowed to think certain thoughts for the same reason that
> a human is not allowed to have a child and maltreat it. Thinking is not a
> private matter when it directly affects somebody else.
So I can't just go thinking my girlfriend-du-jour out of the replicator
for a fun night on the town and then recycle her atoms the next morning???
That just sucks!
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:40 MDT