Re: Goo prophylaxis

Nicholas Bostrom
Fri, 29 Aug 1997 14:29:59 +0000

Eric Watt Forste wrote:

> Nicholas Bostrom writes:
> > i.e. it wants to organize as much matter as possible in the way
> > that it thinks has most value. Except for possible strategic or
> > ethical reasons, it makes no difference whether the matter is virgin
> > territory or matter that some other computer has already organized
> > in a sub-optimal way.
> This would presume that there is a decision procedure for
> optimality, would it not? I have yet to run across any such
> decision procedure; if you've got one, I'd be fascinated to
> hear more about it.

I'm not sure what you mean by a decision procedure for optimality.
The values themselves I take as givens. They might be designed in by
its constructor, or they may result from some accidental occurrence; I
don't assume that they can be deduced from some self-evident axioms
or anything like that.

Given the values, it may or may not be trivial to translate them into
micro-level descriptions of the value-optimal physical state; that
depends on what the values are. But whatever they are, we can be
pretty sure they are more likely to be made real by a
superintelligence who holds those values than by one who doesn't.
(Unless the values intrinsically involve, say, respect for
independent individuals or such ethical stuff.) The superintelligence
realizes this and decides to junk the other computers in the
universe, if it can, since they are in the way of optimising the
SI's values.

Nicholas Bostrom