Re: Nanotechnology

Damien Broderick (damien@ariel.ucs.unimelb.edu.au)
Thu, 10 Oct 1996 01:50:35 +1000


Hi people

Just a few short, quick ripostes to John Clark's illuminating response:

> >Damien quoting Drexler: `If a car were assembled from
> >normal-sized robots from a thousand pieces

>I re-read that passage in Drexler's popular book; it seems to me that he was
>trying to demonstrate to the general reader how small an atom was and how
>many it would take to build a car, about 10^30. [snip]
>
>On page 60 of "Engines of Creation" he talks about building (growing?) a
>rocket engine in one step using general purpose assemblers. [much more snip]

Yeah, but look at all the other stuff in that section (pp. 55-62).
Drexler starts by making a comparison (the basis, I believe, of his
continuing analogy) with a primitive bulk assembler that makes
prefabricated chips and sticks them together, etc.

An assembler arm in a sheet of assemblers (p. 59) cements 10^6 atoms a
second. Wow! But: `making a meter thick slab will take over a year'. And
going quicker makes everything get hot fast. So: prefab at leisure and
stockpile. `Molecular assemblers will team up with larger assemblers to
build big things quickly'. This fits in with my version of the hierarchical
method for nano-making a car. His spaceship engine, admittedly, takes less
than a day, but I gather that the fabricator uses stockpiled carbon fiber
and sapphire (a form of aluminium oxide). (And yes, probably what Drexler
was doing in that cited passage was educating those who can't instantly
grasp the order-of-magnitude gap between atoms and cars. I'm one of them.)
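
Out of curiosity I ran the slab claim through a back-of-envelope sketch. The
10^6 atoms-a-second figure is Drexler's; the atomic-layer thickness and the
patch each arm covers are my own guesses, so take the output as ballpark only:

```python
# Back-of-envelope check of the EofC "meter-thick slab" claim.
# Assumptions (mine, not Drexler's): atomic layers ~0.3 nm thick,
# and each arm in the sheet deposits onto its own ~100 nm patch.

ATOMS_PER_SEC = 1e6        # deposition rate per arm (EofC, p. 59)
LAYER_THICKNESS = 0.3e-9   # metres per atomic layer (assumed)
ATOM_SPACING = 0.3e-9      # metres between atoms in a layer (assumed)
PATCH_SIDE = 100e-9        # side of the patch each arm covers (assumed)

layers = 1.0 / LAYER_THICKNESS                      # layers in a 1 m slab
atoms_per_layer = (PATCH_SIDE / ATOM_SPACING) ** 2  # atoms per patch layer
atoms_per_column = layers * atoms_per_layer         # atoms one arm must place
seconds = atoms_per_column / ATOMS_PER_SEC
years = seconds / (3600 * 24 * 365)

print(f"{atoms_per_column:.1e} atoms per arm, {years:.0f} years")
# ~3.7e14 atoms per arm -> roughly a decade; shrink the patch to 50 nm
# and it drops to about 3 years, so `over a year' is the right ballpark.
```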

What interests me is the number of gestures at programming, instruction, etc.
in these pages. There's the `seed' you mention, a nice device but chock-full
of knowledge that has been put there by prodigious human effort, I gather.
Yes, maybe some of what it sketches is data gathered by `disassemblers', but
as you point out this data has to be shrunk by compression routines. (At
this point I admit what must be obvious, that I don't know what I'm talking
about: maybe this can be turned over to a no-brain number cruncher. But
maybe it can't.)

Here's a citation I think might be salutary, from Kevin Kelly, not known for
his techno-terror, in his Out of Control: The New Biology of Machines
(London: Fourth Estate, 1994):

`...turn on a switch, and a linear system awakens. It's ready to serve you.
If it stalls, restart it... But complex swarm systems with rich hierarchies
take time to boot up. The more complex, the longer it takes to warm up.
Each hierarchical level has to settle down; lateral causes have to slosh
around and come to rest; a million autonomous agents have to acquaint
themselves. I think this will be the hardest lesson for humans to learn:
that organic complexity will entail organic time.' (p. 24)

> >That [DNA] sketchy information gets unpacked via (1) a rich
> >information-dense environment

>But that's no different than Nanotechnology because software is always
>useless without a computer to run it on.

And what I meant wasn't that simple. I'm thinking of the kind of elaborate
and unpredictable informational richness of the environment that goes into
unpacking a genome, which Jacques Monod spoke of a third of a century ago in
CHANCE AND NECESSITY. This info density is available to nano engineers as
well, of course, but it's been explored and exploited by living things via a
truly Vast culled-random-walk. Despise bio-evolution if you will; there's
something to be said for letting Vast numbers (Dennett-speak) of variants do
the walking through design space...

> >How many atoms was that again? How much memory do you have
> >in your hard drive?

>In section 12.7 of Nanosystems Drexler shows how to store 10^25 bits per cubic
>meter in fast RAM, or 10^28 bits in a slower (but still fast by our standards)
>"tape" storage. It should also be noted that the arm on a Nanotechnology
>based Assembler would be 50 million times shorter than a human arm, and that
>means it could move back and forth 50 million times faster than a human arm.

Is this consistent with the EofC quote above?
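
For what it's worth, a crude scaling check suggests the two books are at
least in the same ballpark. The 50-million figure is yours; the
one-metre, once-a-second human arm and the one-atom-per-motion assumption
are mine:

```python
# Crude consistency check between Nanosystems' arm-speed scaling and
# EofC's deposition rate. Assumptions mine: a human arm is ~1 m long,
# swings about once a second, and each nano-arm motion places 1 atom.

SCALE_DOWN = 50e6                  # arm 50 million times shorter (Clark)
human_cycles_per_sec = 1.0         # assumed
cycles_per_sec = human_cycles_per_sec * SCALE_DOWN  # frequency ~ 1/size
atoms_per_sec = cycles_per_sec * 1.0                # one atom per motion

print(f"{atoms_per_sec:.0e} atoms per second")      # 5e+07
# EofC (p. 59) says 1e6 atoms a second, so the two figures agree to
# within a factor of ~50 -- close enough, given the crude assumptions.
```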

> >The link from impossibly complex algorithms generated by
> >such means,

>If such algorithms exist then they are not impossible.

Sorry, my rhetorical skid: *hideously* complex? *dauntingly* complex?

> >nano fabrication, will very quickly escape our understanding
>
>If by "our" you mean 3 pound brains made of meat then I agree, but again, you
>don't need to understand why something works to manufacture it.

No, you don't, but if it's likely to goo you it's nice to have a clue.

>Besides, I don't think 3 pounds of meat will be the smartest thing around
>forever.

Again, there's this tendency to slide from `we won't always be top kid on
the block' to `>AI is just around the corner'. Maybe it is, but I think
rudimentary nanofab might be sooner. On the other hand, one thing I've
learned through 33 years of earning my living as a science fiction writer
(learned in principle, at any rate; it's horribly hard to put into practice)
is the monstrously interconnected complexity of change, which makes novelty
easier and faster (nano *and* AI will emerge at the same time, yes, and
bootstrap each other).

>If you made up a list that contained the type and position of every atom in a
>car this list would contain a HUGE amount of redundancy. [Just use] the same
>sort of algorithms we use today for data compression in ZIP
>and GIF files.

Easily said. But does this work with something of the order of 10^30 atoms,
*each scanned individually*? (Might be easier after all to build from
scratch: `put an iron atom next to this iron atom. Do 10^15 times, then
turn right'. Hmm. Big heavy pots are easy; steaks might be tricky to program.)
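
To make the compression point concrete, here's a toy sketch, scaled down
from 10^30 atoms to 10^6 so it actually runs (the atom-list format is
invented for illustration), showing how much redundancy a DEFLATE/ZIP-style
compressor squeezes out of a regular lattice list:

```python
import zlib

# Toy illustration of the redundancy claim: a type-and-position list
# for a regular 100 x 100 x 100 iron lattice is almost pure pattern,
# so ZIP-style (DEFLATE) compression crushes it.

n = 10**6
atom_list = b"".join(
    b"Fe %d %d %d\n" % (i % 100, (i // 100) % 100, i // 10000)
    for i in range(n))
packed = zlib.compress(atom_list, level=9)

print(len(atom_list), "->", len(packed),
      f"({len(atom_list) / len(packed):.0f}x smaller)")
# A rule like `put an iron atom next to this iron atom, do 10^15
# times, then turn right' is run-length encoding -- more compact still.
```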

As I say, just some stray thoughts. Are we going around in circles here?
(I hope not.)

Best, Damien Broderick