Re: Neutrons and nanotech

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Tue Nov 27 2001 - 07:35:04 MST


On Mon, 26 Nov 2001, Eliezer S. Yudkowsky wrote:

> _Nanosystems_, arguing that nanotech is possible, assumes that a single
> radiation failure knocks out a whole subassembly.

Actually, I don't believe it says that (in Section 6.6). It does say
that he expects diamondoid to be more tolerant of radiation than proteins,
because a single radiation hit can break a protein backbone. Eric's
entire discussion is rather simplified, both because it extrapolates from
comparisons with proteins and because he has to speak in generalities,
since he isn't discussing a specific nanopart design.

> Assuming that one radiation strike, anywhere, knocks out a nanobot,

I didn't assume that.

> (_Some Limits to Global Ecophagy_ contains some particularly blatant
> violations of this principle, for example in discussing how much shielding
> an aerovore requires.)

I'll leave this aside for the moment as it will require my digging
into the technical details.

> Molecules can be error-tolerant. Advanced biology is error-tolerant
> because advanced biology is created by a long history of evolution, and
> each individual evolutionary mutation (as opposed to recombination) is an
> "error".

Actually, many molecules *aren't* very error-tolerant. If you transmute
an atom from one element into another in most of the molecules from
which nanotech would generally be constructed (C, N, O, H, S, P, Si),
you change its bonding properties. In many cases that significantly
alters the molecular structure.

The "error-tolerance" in biology is primarily due to the part
redundancy and the energy invested in recycling damaged parts.
In some cases repair is done, e.g. in yeast ~100 out of ~4000 proteins
are dedicated to DNA repair.

> If there were ever organisms that didn't use error-tolerant
> substrates, we don't see them today; they didn't evolve fast enough.

Which may be one reason that there is a "minimum size" for organisms.
If you are much smaller, you don't have sufficient redundancy to
tolerate conditions that irreparably damage your recycling systems.
It may also be why there is redundancy in the genetic code. A
non-redundant code might not be able to evolve the repair machinery
necessary to maintain the code (at least given the Earth's background
radiation flux).
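
Just to put a rough number on that redundancy (this tally is mine, not
anything from the references): the standard genetic code's degeneracy
means roughly a quarter of all single-base substitutions in a codon are
synonymous, i.e. leave the amino acid unchanged:

    from itertools import product

    BASES = "TCAG"
    # Standard genetic code as a 64-character string, codons ordered
    # TTT, TTC, TTA, TTG, TCT, ... (third base varying fastest, TCAG order).
    AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    CODE = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AAS)}

    synonymous = total = 0
    for codon, aa in CODE.items():
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue  # not a substitution
                mutant = codon[:pos] + base + codon[pos + 1:]
                total += 1
                synonymous += (CODE[mutant] == aa)

    print(synonymous, total, round(synonymous / total, 2))  # roughly a quarter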

> But there is no reason why a similar design principle could not be
> applied to diamondoid; model the results of radiation strikes,
> then design radiation-tolerant molecules. Similarly, larger assemblies
> can be designed which lack single points of failure.

No argument; in fact, Eric says that redundancy is *required*.
But there is a significant difference between the amount of
redundancy required for the normal background radiation flux
(which has only a small neutron component) and the redundancy
required to protect against moderately high doses from neutron beams.
If you have to add lots of redundancy (or spend lots of energy
recycling damaged parts), then the device is going to have size or
energy signatures much larger than a "typical" nanorobot's.
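
As a back-of-the-envelope illustration (a toy model of my own, not a
calculation from Nanosystems): if each redundant copy of a critical part
is knocked out independently with probability p over the design lifetime,
then keeping the chance of losing *all* copies below some target failure
probability takes about ln(target)/ln(p) copies, and that copy count
explodes as p climbs from background-flux toward beam-flux values:

    import math

    def copies_needed(p_hit, p_fail_target=1e-6):
        """Independent-failure toy model: number of redundant copies k
        such that the chance of losing *all* copies, p_hit**k, stays
        below p_fail_target."""
        if p_hit <= 0.0:
            return 1
        return math.ceil(math.log(p_fail_target) / math.log(p_hit))

    # p_hit = probability a single copy is disabled over the design life
    for p_hit in (1e-6, 1e-3, 0.1, 0.5, 0.9):
        print(p_hit, copies_needed(p_hit))   # 1, 2, 6, 20, 132 copies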

One tailors the radiation dose to provide sufficient certainty
that you have *really* killed whatever nanomachinery might be
contained in a sample. We do it now with food irradiation;
it's just that you need a moderately large radiation source
to do it.
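
The flip side of the same toy model (again purely illustrative): if hits
are Poisson-distributed and a single hit disables a part, so a part
survives dose D with probability exp(-sigma*D) for cross-section sigma,
then the dose needed to be confident that *all* N redundant copies are
dead grows with N, which is exactly why the dose gets tailored to the
assumed redundancy:

    import math

    def sterilizing_dose(n_copies, sigma=1.0, p_any_survives=1e-9):
        """Toy model: dose D (in units of 1/sigma) such that the chance
        that any of n_copies escapes a disabling hit,
        1 - (1 - exp(-sigma*D))**n_copies, drops below p_any_survives."""
        p_single = 1.0 - (1.0 - p_any_survives) ** (1.0 / n_copies)
        return -math.log(p_single) / sigma

    for n in (1, 10, 100, 1000):
        print(n, round(sterilizing_dose(n), 1))  # dose creeps up with redundancy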

> We may need to move on to bug-resistant software before we can create
> nanotechnology. I'm not sure I buy the idea that a gigaatom structure can
> be completely debugged by a human-level intelligence.
 
Depends on its complexity. If it's a monolithic structure, I'd agree.
If it's composed of a number of much smaller parts, I think humans
may be able to deal with it. Eric & Ralph's parts to date have
been in the multi-thousand-atom range, so we are clearly up to
the task at that level.

> Yes. Because mature nanotechnology is, or can easily be made,
> radiation-resistant.

Mature nanotechnology can only be made radiation-resistant up
to some "specified" level of radiation, just as it can only remain
heat-resistant up to some maximum temperature. If you want a more
"secure" environment, you would presumably use higher neutron beam fluxes.
If you want a *really* secure environment, you incinerate all incoming
materials in a plasma torch. Of course that requires somewhat more
energy, and you have to reassemble the molecules afterwards -- but
one *is* "safe".

High-dose radiation *will* create more radioisotopes in the irradiated
material. Mature nanotechnology handles this because one will be able
to weigh atoms to 1-Dalton accuracy (Nanomedicine, Section 4.4.3) and
will have the energy and assembly capabilities to rapidly disassemble
and reassemble nanoscale parts that have been damaged, weighing out
the most highly radioactive atoms (or molecules) in the process.
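
A cartoon of that sorting step (my own sketch; the Nanomedicine section
is cited only for the ~1-Dalton weighing accuracy): check each atom's
measured mass against the stable isotopes of the element that is
*supposed* to be at that site, and divert anything a Dalton or more off
to the hot bin. Mass alone isn't enough -- you need to know which element
the structure calls for, since e.g. 14C weighs essentially the same as
stable 14N:

    # Integer mass numbers of the stable isotopes of a few relevant elements.
    STABLE = {
        "H": (1, 2), "C": (12, 13), "N": (14, 15), "O": (16, 17, 18),
        "P": (31,), "S": (32, 33, 34, 36), "Si": (28, 29, 30),
    }

    def looks_transmuted(element, measured_mass, tolerance=0.5):
        """True if the measured mass (in Daltons) is not within
        `tolerance` of any stable isotope of `element` -- e.g. the
        14C produced when a neutron transmutes 14N."""
        return all(abs(measured_mass - m) > tolerance
                   for m in STABLE[element])

    print(looks_transmuted("C", 12.000))  # ordinary 12C -> False
    print(looks_transmuted("C", 14.003))  # 14C -> True
    print(looks_transmuted("N", 14.003))  # same mass, but reads as stable 14N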

> Evolution admittedly has a large head start. I suppose I could buy the
> idea that the initial stages of nanotechnology will be more fragile than
> biology,

Yep, unless you build in both disassemblers and reassemblers,
early nanorobots will only be useful for specific lifetimes.

> as long as we acknowledge that this is a temporary condition.
> But open-air nanotech is not exactly an early stage.

Agreed.
 
Robert


