Freitas on Gray Goo

From: hal@finney.org
Date: Sat May 13 2000 - 21:30:26 MDT


Robert Bradbury pointed to http://www.foresight.org/NanoRev/Ecophagy.html,
a paper by Robert Freitas, author of Nanomedicine, on the gray goo
problem.

I've looked this paper over and I have a number of concerns. Before I
get into that I should thank the author for providing hard numbers in a
debate which has too often been about generalities. Also I appreciate
Robert Bradbury for making the paper available in html form.

A minor criticism first: the title of the paper seems unnecessarily
opaque: "Some Limits to Global Ecophagy by Biovorous Nanoreplicators,
with Public Policy Recommendations." Here we have two unfamiliar terms,
"ecophagy" and "biovorous". As far as I know these are coined terms
which you will not find in the dictionary or the technical literature
(they do seem to have been used in some science fiction computer games).
While the meaning of these words can be deduced from the root forms,
and "ecophagy" is defined at least indirectly in the introduction, it
leaves me with the impression that jargon is being used unnecessarily
to add political impact.

I've heard of Drexler gloatingly putting up slide after slide of dense
equations from Nanosystems, to tweak his earlier critics who complained
that Engines lacked technical depth. I can just imagine the glee with
which Foresight presentations will quote the mouthful of a title with
which this paper has been burdened.

Now to the meat of the paper. The thrust is that gray goo is not a problem,
because we will have ample warning before the goo is close to taking
over the biosphere. The thermal limits seem to be the ones considered
most important. The abstract reads:

     The maximum rate of global ecophagy by biovorous self-replicating
     nanorobots is fundamentally restricted by the replicative strategy
     employed; by the maximum dispersal velocity of mobile replicators;
     by operational energy and chemical element requirements; by the
     homeostatic resistance of biological ecologies to ecophagy; by
     ecophagic thermal pollution limits (ETPL); and most importantly
     by our determination and readiness to stop them. Assuming current
     and foreseeable energy-dissipative designs requiring ~100 MJ/kg
     for chemical transformations (most likely for biovorous systems),
     ecophagy that proceeds slowly enough to add ~4°C to global
     warming (near the current threshold for immediate climatological
     detection) will require ~20 months to run to completion; faster
     ecophagic devices run hotter, allowing quicker detection by policing
     authorities. All ecophagic scenarios examined appear to permit early
     detection by vigilant monitoring, thus enabling rapid deployment
     of effective defensive instrumentalities.

I have several problems here. First, the strategic scenario in which
this problem is being discussed is not made clear. The assumption in
the 20 month scenario is that the goo wants to avoid being detected.
Well, that is absurd. It will only take 20 months if the goo expects
to take over the biosphere without anyone noticing! Don't you think
someone would notice? Hey, we just lost New York. Naw, couldn't be,
the global temperature hasn't gone up more than four degrees.

Obviously the goo can't help being detected at some point. The 20 month
scenario just doesn't make sense.

A more plausible scenario is described where the gray goo grows somewhat
faster and the temperature is allowed to rise more: "For example, taking
t = 100 sec, TEarth = 300°K, and Ediss ~ 100 MJ/kg, the transition to
the ETPL regime occurs when total global nanomass reaches ~5×10^10 kg,
or only 0.001% of total global biomass, and the last ~17 population
doublings remain to be completed over a time span of ~2 tlast = 2×10^7
sec (~7 months)." 7 months still sounds like quite a bit of warning,
although not as generous as 20 months.

However, even this scenario is sensitive to the efficiency of the goo.
Both scenarios assume a dissipation of 100 MJ/kg
for the conversion. This is justified with "Drexler [4] estimates
that the typical energy dissipation caused by chemical transformations
involving carbon-rich materials will be Ediss = (q Dbio) ~ 100 MJ/kg of
final product using readily-envisioned irreversible methods in systems
where low energy dissipation is not a primary design objective." Well,
we're assuming the goo is attacking using a stealth method where low
energy dissipation is an important design objective. Hence Drexler's
estimate is not very relevant.

The paper also suggests that the 100 MJ/kg estimate is appropriate
because highly dissipative designs are easier to produce and, given the
difficulty of the problem, these are the kinds of gray goo threats most
likely to be faced during the early and intermediate years of nanotech.
Even if true, the paper has not previously stated that its estimate was
only meant to apply to immature nanotech systems.

Even terrestrial vegetation is quoted as being able to do better than
this figure, dissipating 38 MJ/kg. The paper is assuming that nanotech
will be less capable than biology. That is not consistent with the usual
conservative design assumptions for nanotech. Drexler and Merkle think
that 0.1 MJ/kg is possible in theory.

If we assume 10 MJ/kg, modestly better than the biosphere, the 7 month
scenario shrinks by a factor of 10 to about 3 weeks, much less time for
defense. And if we ever did reach Drexler's optimum design, things would
be over in minutes (as little as 76 seconds from single nanite to total
conversion of world biomass, according to one of the more extravagant
extrapolations in the paper).
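
To make the scaling explicit, here is a minimal sketch. It simply assumes
the warning window in the thermally-limited regime scales linearly with
the energy dissipated per kilogram converted, which is how the paper's
own numbers behave; the 76-second figure above comes from a different,
replication-rate-limited extrapolation and is not reproduced here.

    # Warning time before detection, if detection is keyed to a fixed
    # thermal signature, scales with the dissipation per kg converted.
    baseline_ediss = 100.0    # MJ/kg, the paper's assumed dissipation
    baseline_warning = 210.0  # days, the ~7 month scenario quoted above

    for ediss in (100.0, 38.0, 10.0):   # paper, green plants, modestly better goo
        days = baseline_warning * ediss / baseline_ediss
        print(f"{ediss:6.1f} MJ/kg -> ~{days:.0f} days of warning")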

Another problem is that the paper's emphasis is on detection. In several
places it seems to assume that this is the hard part of the problem and
that actual defense is relatively easy, which is certainly counter to
the conventional wisdom:

"All ecophagic scenarios examined appear to permit early detection
by vigilant monitoring, thus enabling rapid deployment of effective
defensive instrumentalities."

"Constant ecological surveillance for any evidence of ecophagic activity
is an appropriate policing measure to provide adequate early warning to
the existence of this threat."

"...giving still more time for defensive instrumentalities to be brought
to bear on the situation."

"Four related scenarios which may lead indirectly to global ecophagy
have been identified and are described below. In all cases, early
detection appears feasible with advance preparation, and adequate
defenses are readily conceived using molecular nanotechnologies of
comparable sophistication."

The last scenario considered goes into the most detail in terms of
the strategic issues, laying out the battle lines for a "goodbot vs
badbot" war, Drexler's defensive shield idea fleshed out in more detail.
Three reasons are given for why the goodbots have an advantage:

   1. Preparation -- defensive agencies can manufacture and position in
     advance overwhelming quantities of (ideally, non-self-replicating)
     defensive instrumentalities, e.g., goodbots, which can immediately
     be deployed at the first sign of trouble, with minimal additional
     risk to the environment;

   2. Efficiency -- while badbots must simultaneously replicate and defend
     themselves against attack (either actively or by maintaining
     stealth), goodbots may concentrate exclusively on attacking badbots
     (e.g., because of their large numerical superiority in an early
     deployment) and thus enjoy lower operational overhead and higher
     efficiency in achieving their purpose, all else equal; and

   3. Leverage -- in terms of materials, energy, time and sophistication,
     fewer resources are generally required to confine, disable, or
     destroy a complex machine than are required to build or replicate the
     same complex machine from scratch (e.g., one small bomb can destroy a
     large bomb-making factory; one small missile can sink a large ship).

It's ironic that the standard argument for gray goo advantage, that it can
be destructive while the defenders must preserve information, is turned
on its head here. Now it is the gray goo which is struggling to survive
and replicate, while the defenders with their numerical superiority can
lay waste if necessary in order to destroy an infestation.

I'm not the best person to judge the adequacy of these assumptions.
We have had extensive discussion here, war-gaming the gray goo scenario,
and I think the presentation in the paper is tremendously over-simplified.
To the extent that we are led to conclude that gray goo (excuse me,
global ecophagy by biovorous nanoreplicators) is not a serious problem
based on the paper's analysis, I think it is highly misleading.

Here is what the active shield involves:

   Second, consider the defense of the entire eukaryotic
   biosphere. Excluding bacteria assumed to represent about half of global
   biomass and assuming an average eukaryotic cell size of 20 microns,
   there are ~3×10^26 eukaryotic cells on Earth. If each cell is visited
   and examined, on average, about once a year with time spent per cell
   ctime = 100 sec/cell as before, this implies a global examination
   rate of Xcell ~ 10^19 cells/sec and a requirement for Xcell × ctime ~
   10^21 cell-monitoring nanorobots, representing a total worldwide
   nanomachine volume of ~1000 m^3 of 1-micron nanorobots consuming
   ~10 GW (~0.1% total current human global power generation) assuming
   ~10 pW/device. In this surveillance regime, a ~1 mg infestation of
   1-micron badbots in a 3 meter wide, 30 meter tall redwood tree (fn ~
   10^-11) is first detected in ~100 millisec -- again, triggering a
   prompt corrective response.
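
The arithmetic in that passage does check out; here is a quick sketch
using the paper's own assumptions (1-micron cube devices at ~10 pW each):

    SECONDS_PER_YEAR = 3.15e7

    cells = 3e26            # eukaryotic cells on Earth (paper's estimate)
    t_per_cell = 100.0      # seconds spent examining each cell
    visits_per_year = 1.0   # each cell checked about once a year

    exam_rate = cells * visits_per_year / SECONDS_PER_YEAR   # ~1e19 cells/sec
    monitors = exam_rate * t_per_cell                         # ~1e21 busy nanorobots

    device_volume = 1e-18   # m^3 (a 1-micron cube)
    device_power = 1e-11    # W per device (~10 pW)

    print(f"examination rate:  {exam_rate:.1e} cells/sec")
    print(f"monitoring robots: {monitors:.1e}")
    print(f"total volume:      {monitors * device_volume:.0f} m^3")
    print(f"total power:       {monitors * device_power / 1e9:.1f} GW")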

So here we have nanotech capable of visiting and entering every eukaryotic
cell in the world and scanning it for evil nanobots, plus an automated
war-fighting response capable of very quickly and safely "correcting"
the problem, hopefully without wiping out every human within sight.

This is not an early or intermediate level of nanotech development.
It would be among the most sophisticated nanotech applications imaginable.
By the time such a global system could be designed, developed and put
into play, gray goo could have wiped out the world ten times over.
There seems to be a fundamental mismatch between the sophistication of
the goodbots, who run an active immune system that checks every cell
on the planet, and the badbots, who can't manage to operate even as
efficiently as green plants.

In summary, Robert Freitas does a good job in the quantitative analysis
of the limitations faced by gray goo. But the conclusion that the gray
goo problem is readily dealt with is much less convincing. The paper
would be better if it just focused on the numbers, and left the strategic
analysis to another day.

It also should be much more objective about the seriousness of the gray
goo threat. Foresight seems to have made a political decision to downplay
gray goo in the last several years, and this paper unfortunately seems
to be consistent with that political position. Much more work needs
to be done before we have a clear picture of the true scope of the gray
goo threat. Robert Freitas has made an important contribution, but we
are not yet in position to settle the matter.

Hal Finney
