Re: Goo prophylaxis

Eric Watt Forste
Sat, 30 Aug 1997 14:29:45 -0700

I wrote:
> > In order for
> > molecular nanotechnology to present a serious military threat to
> > existing soldiery, highly optimized designs would have to be
> > developed.

Nicholas Bostrom asks:
> Why?

Much of the cruft and "bad design" that people complain about
in biological systems, at least at the nano level, seems to me
to be rather intricate defenses against hostile life,
especially viruses. Take the intronic "junk DNA" that
spliceosomes cut out of pre-mRNA during mRNA processing: it has
been hypothesized that spliceosomes developed as a defense
against retroviruses, which can insert a DNA copy of their
genome into the host germ line.

I'm starting to suspect that (at the nano level, not at the
macroscopic mechanical level) life is highly optimized for survival
under hostile conditions (where the source of the hostility is
mostly competing life forms). Consider the incredible
dice-rolling device that generates the DNA that codes for
antibodies. The metabolic thermodynamic efficiencies, as has
been noted elsewhere, are also rather better than human beings
can usually attain with designed mechanical devices.
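
For concreteness, the dice-rolling above is combinatorial:
antibodies are assembled by shuffling gene segments (V(D)J
recombination), so the diversity from segment pairing alone
multiplies out. A back-of-the-envelope sketch, using illustrative
segment counts (the real numbers vary by species, locus, and
source):

```python
# Rough combinatorial arithmetic behind antibody diversity.
# Segment counts are illustrative approximations for human
# immunoglobulin loci; exact figures vary by source.
heavy_v, heavy_d, heavy_j = 40, 25, 6   # heavy-chain V, D, J segments
light_v, light_j = 40, 5                # light-chain V, J segments

heavy_combinations = heavy_v * heavy_d * heavy_j
light_combinations = light_v * light_j
antibody_combinations = heavy_combinations * light_combinations

# Segment pairing alone; junctional diversity and somatic
# hypermutation multiply this much further.
print(antibody_combinations)
```

Even with these conservative counts the pairing alone yields over
a million distinct receptors, before the sloppy junction-joining
adds orders of magnitude more.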

Biological systems often have a tradeoff between toughness and
fecundity. Some species are fragile but extremely fecund, like
mayflies and corals/medusas. The species that are tough (like human
beings) are rarely prolific. The big fear about molecular
nanotechnology is the self-reproducing aspects, the fear of a
highly prolific system. I think that design tradeoffs would
make the first generation of highly prolific nanotech be either
fragile or extremely energy-inefficient, or both.

I would expect that we'll have considerable experience in
dealing with prolific nanotech, with energy-efficient nanotech,
and with tough nanotech long before we ever have to face off
against nanotech that has all three of these properties.

> And even if that is the case, don't forget that the step from rather
> optimized designs to highly optimized designs might be fairly quick,
> and if that's the step that creates the big military potential,
> then...

This speculation usually rests on the idea that we will, in
parallel, have developed AIs or easily replicable uploads that
will be able to do good engineering work far more rapidly than
present-day human beings can. Otherwise, I think Carl Feynman
is on the mark in estimating that the capabilities of
nanotechnology will unfold over the course of several
"technology generations," not in one fell swoop.

I doubt that uploads will be possible without molecular nanotechnology
for use as probes to determine the structure and processes of a
living brain in order to accurately simulate it. So I don't think
we can rely on uploads to speed up nanotech development in the
way your fears require. We'll have plenty of experience with
nanotech (and time to develop defenses) before we have any
uploads to accelerate its development.

That leaves non-upload AI, which I grow increasingly skeptical
about. (This could change any month, though.) Minsky and Lenat
and a lot of other people who know a lot more than I do are probably
going to hate my saying this, but the most plausible course of
attack in this field right now looks to me like playing with feedback
nets using a bunch of different unsupervised-learning algorithms.
I am pretty sure that most such systems built so far have
produced no behavior more interesting than catatonia, but I
would expect that if non-upload AI has a future, this is where
it's going to come from. (I'm not ready to lay any money down
on this prediction, though.)

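A toy instance of the kind of feedback net with an
unsupervised-learning rule meant above (my own illustration, not
anything from the original post) is a single linear unit trained
with Oja's rule, a stabilized Hebbian update. No teacher signal is
used; the unit's own output feeds back into the weight change:

```python
import numpy as np

# One linear unit, three inputs, trained with Oja's rule:
# a Hebbian term (eta * y * x) plus a decay term (-eta * y^2 * w)
# that keeps the weights from blowing up.
rng = np.random.default_rng(0)
w = rng.normal(size=3)   # random initial weights
eta = 0.02               # learning rate

for _ in range(5000):
    x = rng.normal(size=3)
    x[0] *= 3.0                  # give one input direction more variance
    y = w @ x                    # unit's output, fed back into the update
    w += eta * y * (x - y * w)   # Oja's rule

# The weights drift toward a unit vector along the
# highest-variance input direction (here, roughly axis 0).
print(np.round(w, 3))
```

The interesting (and frustrating) property is exactly the one at
issue here: nothing in the loop "supervises" the unit, yet
structure in the input stream ends up encoded in the weights, and
predicting what a large tangle of such units will settle into is
hard.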
The problem, as Minsky would probably point out, is that such
systems are intractable. Well, there are lots of systems that used
to be intractable that aren't anymore. Heart surgery was an
intractable proposition five hundred years ago. I don't know what
it would take to make working with those sorts of systems intellectually
tractable, but I'll probably spend a little while thinking about
it. I'm sure I won't be alone in this.

What it comes down to for me is that a lot of our intuitions about
the possibilities in future technology are based on our common
understanding (common among extropians, at least) that we are
physical beings, that our very bodies are existence proofs for
nanotechnology and that our brains are existence proofs for
physically-embodied intelligence.

I would expect the first physically-embodied artificial intelligence
to follow the "intractable" patterns of the existence proof. There
is no existence proof for physically-embodied artificial intelligence
based on an abacus architecture.

Eric Watt Forste ++ ++ expectation foils perception -pcd