Neutrons and nanotech (was: NEWS: [deleted] can now sleep at night)

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sun Nov 25 2001 - 23:48:30 MST


"Robert J. Bradbury" wrote:
>
> Master Bradbury says, "Well we *know* that mature nanotechnology
> is very sensitive to radiation right?" Mr. Jones says "yes".

...no.

_Nanosystems_, arguing that nanotech is possible, assumes that a single
radiation failure knocks out a whole subassembly. This is the
conservative assumption, although I'm less sure about the conservativeness
of assuming that the resulting failure consists of the assembly going
offline, rather than beginning to produce distorted components. I've
often suspected that the first nanoassembler will not be automated but
will instead involve a hundred trained engineers as controllers, for
exactly this reason. Anyway...

Assuming that one radiation strike, anywhere, knocks out a nanobot, is an
EXTREMELY unconservative assumption in discussions of military
nanotechnology. You can't just move "conservative" assumptions from one
domain to another without checking whether they're still conservative. In
particular, you can't use _Nanosystems_ design assumptions if you're
arguing for a *limitation* on an offensive nanotech capability because
_Nanosystems_ is trying to be conservative for a *possibility proof*.
(_Some Limits to Global Ecophagy_ contains some particularly blatant
violations of this principle, for example in discussing how much shielding
an aerovore requires.)

Molecules can be error-tolerant. Advanced biology is error-tolerant
because advanced biology is created by a long history of evolution, and
each individual evolutionary mutation (as opposed to recombination) is an
"error". Biology does not evolve except on substrates that involve
relatively smooth fitness landscapes, because that is a necessary past
characteristic of organisms that currently possess a long evolutionary
history. If there were ever organisms that didn't use error-tolerant
substrates, we don't see them today; they didn't evolve fast enough.

Biology clearly demonstrates that it is possible to build highly
error-tolerant organisms, albeit using proteins bound together by van der
Waals forces instead of covalent bonds, which is why biological machinery
is so much weaker and larger than diamondoid machinery. But there is no
reason why a similar design principle could not be applied to diamondoid;
model the results of radiation strikes, then design radiation-tolerant
molecules. Similarly, larger assemblies can be designed which lack single
points of failure.
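As a rough illustration of designing out single points of failure, here is a toy k-of-n redundancy calculation in Python. The failure probabilities are made up for illustration; nothing here is taken from _Nanosystems_ itself. The point is only how fast redundant modules beat a single fragile unit:

```python
from math import comb

def survival_prob(p_unit_fail: float, n: int, k: int) -> float:
    """Probability that at least k of n redundant units survive,
    assuming each unit independently fails (e.g. from a radiation
    strike) with probability p_unit_fail."""
    p_ok = 1.0 - p_unit_fail
    return sum(comb(n, i) * p_ok**i * p_unit_fail**(n - i)
               for i in range(k, n + 1))

# A single unit with a 10% chance of a radiation-induced failure
# survives with probability 0.9.
single = survival_prob(0.10, n=1, k=1)

# Five redundant units, any three of which suffice, survive with
# probability about 0.991 under the same per-unit failure rate.
redundant = survival_prob(0.10, n=5, k=3)
```

Under these assumed numbers, modest redundancy already pushes assembly-level reliability well past the reliability of any individual module, which is the same design move the paragraph above suggests for diamondoid.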

Just as biology necessarily solved the problem of error-tolerance as a
prerequisite of evolution, nanotech may need to solve the
single-point-of-failure problem as a prerequisite of debugging. If matter
is software, after all, then it needs to be debugged.

The computing industry has been putting off the problem of designing
computer programs which lack single points of failure for some time, since
today you can get away with just rebooting the computer. Companies that
wish to ensure reliability are still trying to build completely debugged
software rather than bug-resistant software.

We may need to move on to bug-resistant software before we can create
nanotechnology. I'm not sure I buy the idea that a gigaatom structure can
be completely debugged by a human-level intelligence.
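One standard software analogue of bug-resistance (as opposed to complete debugging) is majority voting over independently written implementations, sometimes called N-version programming. This is a hypothetical sketch of the idea, not a description of any practice mentioned above:

```python
from collections import Counter

def majority_vote(*implementations):
    """Wrap several independent implementations of the same function;
    return the majority answer, masking a minority of buggy versions
    instead of requiring every version to be bug-free."""
    def voted(*args, **kwargs):
        results = [f(*args, **kwargs) for f in implementations]
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority among implementations")
        return value
    return voted

# Hypothetical example: two correct doublers and one buggy one.
double_a = lambda x: x * 2
double_b = lambda x: x + x
double_buggy = lambda x: x * 2 + 1   # off-by-one bug

double = majority_vote(double_a, double_b, double_buggy)
answer = double(21)   # the buggy version is outvoted; returns 42
```

The design choice is the same as in hardware triple modular redundancy: you pay a constant-factor cost in work to tolerate any single faulty component, rather than betting everything on perfect verification.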

> Though I've mentioned it only briefly in previous posts, scientists
> at Los Alamos have proposed methods for transmuting radioactive
> isotopes back into stable isotopes. What has been lacking is
> an efficient inexpensive separation technology (for the radioactive
> vs. non-radioactive atoms). Mature nanotechnology provides such
> separation capabilities.

Yes. Because mature nanotechnology is, or can easily be made,
radiation-resistant.

Evolution admittedly has a large head start. I suppose I could buy the
idea that the initial stages of nanotechnology will be more fragile than
biology, as long as we acknowledge that this is a temporary condition.
But open-air nanotech is not exactly an early stage.

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:22 MDT