Re: Goo prophylaxis (was: Hanson antiproliferation method?)

Anders Sandberg
Mon, 25 Aug 1997 15:47:24 +0200 (MET DST)

On Sun, 24 Aug 1997, Eliezer S. Yudkowsky wrote:

> Very true. The power of evolutionary computing lies in its blind speed.
> Let's say we have a million-organism brew that evolves for a thousand
> generations. Now let's say we have a hundred thousand programmers, each of
> whom chooses an organism out of the stew and redesigns it. The latter
> situation will proceed more slowly than the first, and be vastly more
> expensive, but it will also be far more powerful and capable of reaching
> loftier goals.

Well, that depends on how much redesign the programmers could do. If
you have ever looked at genetically evolved programs, you know they
are a mess. Structured programming? Bah! Spaghetti code? Not even
that - angel hair code! (slightly overcooked too...)
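For concreteness, here is a toy sketch of the blind variation-and-selection Eliezer is talking about: a trivial "one-max" genetic algorithm evolving bitstrings toward all ones. Every parameter (population size, mutation rate, number of generations) is an arbitrary illustrative choice. Note that nothing in the loop understands the problem; it just varies and selects, and that is exactly where the blind speed comes from.

```python
import random

random.seed(1)
GENOME, POP, GENS, MUT = 32, 100, 200, 0.01  # all illustrative choices

def fitness(genome):
    # "One-max": fitness is simply the number of 1-bits.
    return sum(genome)

# Random initial population of bitstrings.
pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                 # truncation selection: keep top half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(GENOME)       # one-point crossover
        child = a[:cut] + b[cut:]
        # Per-bit mutation: flip each bit with probability MUT.
        child = [bit ^ (random.random() < MUT) for bit in child]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))   # typically at or near GENOME after a few hundred generations
```

Evolved *programs* (genetic programming) come out the same way: they work, but their structure is whatever happened to survive selection, which is why they read like angel hair.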

> Very, very true. A lot of people on this list seem to lack a deep-seated
> faith in the innate perversity of the universe. I shudder to think what would
> happen if they went up against a perverse Augmented human. Field mice under a
> lawn mower.

I don't think the universe is perverse. We just like to think it is,
since it takes the blame :-)

Perverse posthumans might be a problem, which is yet another reason
to learn everything we can about how to create healthy humans that
can get along, and the psychology of self-augmentation.

> Same here. It's immensely easier to destroy than create. You couldn't "hide"
> from a predatory nanite. You could slow it down, keep it from getting to your
> bunker, surround it with a continuous wall of nuclear flame, and then make
> your escape into space and blast the gooey Earth into bits... then try to
> rebuild civilization in the new asteroid belt.

Sigh. Who has read too much sf and seen too many movies now? You seem
to credit the nanites with tremendous reproductive capabilities
and a very high level of intelligence, hell-bent on finding the last
survivors and killing them. Yes, such a nanoweapon could in principle
be created, but is it the realistic standard immune systems should be
measured against? In that case we obviously have to look out for the
heat-seeking, contagious-like-common-cold, fast-mutating retroviral
ebola viruses...

> "Who will guard the guardians?" - maybe nanotechnology would give us a perfect
> lie detector. Nanotechnology in everyone's hands would be just like giving
> every single human a complete set of "launch" buttons for the world's nuclear
> weapons. Like it or not, nanotechnology cannot be widely and freely
> distributed or it will end in holocaust. Nanotechnology will be controlled by
> a single entity or a small group... just as nuclear weapons are today.

It is this assumption I want to challenge. If nanotech really has the
tremendous destructive potential you assume, then your conclusion
follows fairly logically. But can you really back it up with some
hard calculations?
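As a gesture toward what such a calculation looks like, here is the standard exponential-growth argument. Every number below is an illustrative assumption (a roughly bacterium-sized replicator, a one-hour doubling time, a round figure for Earth's biomass), not a measured value:

```python
import math

# Unchecked replicator growth: mass doubles every T_d hours.
# All three parameters are illustrative assumptions, not data.
m0 = 1e-15        # assumed initial replicator mass, kg (~ a bacterium)
biomass = 1e15    # assumed order of magnitude for Earth's biomass, kg
T_d = 1.0         # assumed doubling time, hours

doublings = math.log2(biomass / m0)  # doublings needed to match the biomass
hours = doublings * T_d
print(round(doublings), round(hours))  # -> 100 100 with these assumptions
```

The doubling count only grows logarithmically in the mass ratio, so "about a hundred doublings" is robust to large errors in the assumed masses; the real question is whether anything (immune systems, resource limits, heat dissipation) checks the growth long before then.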

You might be worrying about an imaginary ultra-danger, which will
suggest a course of action which is less than optimal but sounds
plausible. Remember that we humans consistently overestimate the risks
of huge disasters and underestimate small, common disasters, and that
fear is the best way of making people flock to an "obvious solution",
especially if it is nicely authoritarian.

I think you are partially right: nanotech will be dangerous, but we
have to estimate the threat levels and what countermeasures can
be created before we jump to conclusions about future politics. For
example, if decent immune systems can be created then the dishwasher
goo scenario is unlikely, and if relatively few have the twisted
genius and expertise to design Hollywood goo then it is a potential
danger, but with a likelihood of occurring that is low enough for some
planning to be done (like moving outwards, which ought to be feasible
at the assumed tech level). We need to get some estimates of these
threat levels.
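One crude way to put a shape on that likelihood: if each capable actor independently goes rogue with some small probability per period, the chance of at least one incident is 1 - (1 - p)^n. Both n and p below are purely made-up numbers for illustration; the independence assumption is itself debatable.

```python
# Chance that at least one of n independent, capable actors builds
# Hollywood goo in a given period, if each does so with probability p.
# Both parameters are purely illustrative assumptions.
def p_any(p, n):
    return 1 - (1 - p) ** n

print(round(p_any(1e-4, 1000), 3))  # -> 0.095 with these made-up numbers
```

Even a sketch like this makes the policy question concrete: the interesting work is in estimating n and p and in how much immune systems reduce the damage per incident, not in assuming the worst case outright.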

Anders Sandberg Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y