No AI for Nano/No Nano for copyloads [was Re: No nanotech before AI]

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Thu Jul 13 2000 - 04:13:50 MDT


On Thu, 13 Jul 2000, David Blenkinsop wrote:

> Jumping in here, has anyone really defined for certain what a useful
> "AI" would be, and how it would differ, exactly, from any other piece of
> software?

You do *not* need classical AI. Someone (at the Foresight or Contact
Conference???) observed that the interesting thing about AI is that
once you figure out how to do something using an algorithm or a collection
of ad-hoc knowledge and decisions, people stop calling it "AI".
Remember when you had to have "AI" to play chess or drive a car?

What you need are matter compilers, design verification tools, and a huge
library of molecular "parts". Think of every type of mechanical thingy
in the macro-scale world (wheels, gears, beams, axles, pipes, pumps,
motors, conductors, lights, speakers, etc.) and now imagine all of those
constructed out of a few hundred thousand to a few million atoms.
That is why I think a Nano@home approach, using random atomic fill-in with
semi-intelligent (human?) assisted "selection", is an interesting way to go.
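
A minimal sketch of the idea in Python, with the scoring function and
the part/site model entirely made up for illustration:

    import random

    SITES = 200        # open lattice sites in a partial design (assumed)
    CANDIDATES = 1000  # random fill-ins to generate per round

    def random_fill(n_sites):
        # Assign a random element (or "." for a vacancy) to each open site.
        return [random.choice(["C", "N", "O", "H", "."]) for _ in range(n_sites)]

    def stability_score(design):
        # Stand-in for a real molecular-mechanics energy evaluation;
        # here it just rewards carbon-rich, low-vacancy structures.
        return design.count("C") - 2 * design.count(".")

    # Generate many random fill-ins, machine-score them, and keep the
    # best few for a human to inspect and "select" among.
    candidates = [random_fill(SITES) for _ in range(CANDIDATES)]
    shortlist = sorted(candidates, key=stability_score, reverse=True)[:10]
    for rank, design in enumerate(shortlist, 1):
        print(rank, stability_score(design))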

If you have all the parts pre-designed and tested, then a pretty decent
"Hardware-Description-Language" and some tools similar to those now
used in the semiconductor industry would let you ramp Nano pretty fast.
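
A toy sketch of what such a parts library and compiler front end might
look like; the part names, atom counts, and the compile_design function
are all invented here for illustration:

    # Toy "hardware description" for molecular machinery: compose designs
    # only from pre-designed, pre-verified parts in a shared library.
    PARTS = {  # atom counts are illustrative guesses, not real designs
        "bearing": {"atoms": 2600, "verified": True},
        "gear":    {"atoms": 3500, "verified": True},
        "pump":    {"atoms": 6200, "verified": True},
        "motor":   {"atoms": 11000, "verified": False},
    }

    def compile_design(netlist):
        # A "matter compiler" front end: reject any part that hasn't
        # passed design verification, then total up the atom budget.
        atoms = 0
        for part, count in netlist:
            spec = PARTS[part]
            if not spec["verified"]:
                raise ValueError(part + " has not passed verification")
            atoms += spec["atoms"] * count
        return atoms

    print(compile_design([("bearing", 4), ("gear", 8), ("pump", 1)]))  # 44600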

However, considering how long it took the semiconductor industry to
develop these things, you can see it isn't going to happen quickly
unless there is some breakthrough in software development times.

(As an aside, people always gripe about how long it takes to write
software -- if you view what we are doing as a process of replicating
what nature has done (creating intelligent, self-conscious entities),
then we are doing it about a million times faster.)

> Nano theorists and nano doubters alike seem to enjoy talking
> about AI's, but maybe all you really need is just some very fancy
> software to do the "automated molecular skyscrapers" from simple feed
> stock material.

I agree with this. The only somewhat fuzzy AI required, in my mind,
is the part where you need to verify that someone isn't "sneaking"
through parts in separate designs that can subsequently be harvested
and used to build something dangerous. I think Eric has suggested
that at least some designs may need to be assembled in such a way that
they break or melt down if attempts are made to disassemble them.

> Really, though, is there an implicit assumption that "compiler" software
> is at a standstill -- so therefore you need a "Max Headroom" AI before
> you can successfully automate some range of constructions?

No, I think anyone who thinks Nanotech requires AI hasn't thought it
through very well. Biotech is a limited subset of Nanotech: we already
have the genetic programs (operating systems), we are slowly working our
way through the complete toolkits bio-organisms use for self-construction,
and we haven't had to use very much "A" in the "I".

>
> > From Robin (???):
> > I disagree that you need advanced nanotech to upload. It all depends on
> > just how detailed information you need on neurons and synapses. And it
> > could well be worth spending billions of dollars to scan just one brain,
> > given how much money could be made from copies of that one brain. We didn't
> > need advanced nanotech to read the human genome -- because it was worth
> > enough to read just one genome.

> Er, wait a minute! You're going to get detailed info on about a
> quadrillion brain synapses, and you're going to do that without advanced
> nanotech?

You can imagine a destructive readout approach with nothing more than
extremely fine tissue slicers combined with near-field microscopy
or electron microscopy that can "read" all of the synaptic interconnect
pathways in the brain. If those pathways encode all of our "intelligence",
then this will let you reproduce yourself. In my mind this isn't an "upload"
but a "copyload". If, however, there is interconnect or synapse weighting
information in the density and type of neurotransmitters in the synapses,
then the readout is going to require the application of a lot of antibodies
or other neurotransmitter identification & quantification molecules, and
that is probably beyond current technology. It might be feasible in
ten or so years, once we have X-ray/NMR-based models for all of the molecules
we need to "feel" and fancy computers like Blue Gene that would let
us accurately design complementary sensing molecules.

My guess, though, is that that type of readout would be quite slow and
very expensive.
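
Some rough numbers on why (every parameter below is an assumption on my
part, the slice thickness and imaging resolution especially):

    # Back-of-envelope for a destructive slice-and-image readout.
    BRAIN_VOLUME = 1.4e-3  # m^3 (~1.4 liters, assumed)
    BRAIN_HEIGHT = 0.15    # m along the slicing axis (assumed)
    SLICE_THICK  = 100e-9  # m, thin enough to resolve synapses (assumed)
    PIXEL_SIZE   = 10e-9   # m, imaging resolution (assumed)

    slices = BRAIN_HEIGHT / SLICE_THICK            # ~1.5e6 slices
    slice_area = BRAIN_VOLUME / BRAIN_HEIGHT       # ~9.3e-3 m^2 each
    pixels_per_slice = slice_area / PIXEL_SIZE**2  # ~9.3e13 pixels
    total_pixels = slices * pixels_per_slice       # ~1.4e20 pixels

    # Even at one byte per pixel that is ~10^20 bytes of raw image data.
    print("%.1e slices, %.1e raw pixels" % (slices, total_pixels))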

Regarding the quantity of information: you have ~60 billion neurons,
so 6x10^10 * 10^3 synapses/neuron = 6x10^13 synaptic connections
(60 trillion). Assume you need source & destination address
information (~40 bits each, unless you do some interesting local
address compression) and some strength-of-connection information (4 bits?),
and that works out to roughly 600 terabytes of information just to store
the brain map -- without even trying to execute the simulation.
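
The same arithmetic spelled out in a few lines of Python:

    NEURONS       = 6e10  # ~60 billion (the estimate above)
    SYNAPSES_PER  = 1e3   # synapses per neuron
    ADDRESS_BITS  = 40    # per source or destination address
    STRENGTH_BITS = 4     # strength-of-connection field

    synapses = NEURONS * SYNAPSES_PER            # 6e13 connections
    bits_per = 2 * ADDRESS_BITS + STRENGTH_BITS  # 84 bits/synapse
    total_bytes = synapses * bits_per / 8        # ~6.3e14 bytes

    print("%.0f terabytes" % (total_bytes / 1e12))  # ~630 terabytes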

If that is accurate, you will be able to store it on a single disk
drive circa 2014 (assuming current density doubling/year continues).
Getting it into main memory (processor-in-memory) will take several
years longer still. Even then it is going to be a pretty sizable computer.
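
And the disk extrapolation, assuming (my guess) a ~75 GB top-end drive
in 2000:

    import math

    DRIVE_2000_GB = 75.0   # assumed top-end drive capacity in 2000
    TARGET_GB     = 6.3e5  # ~630 terabytes for the brain map above

    doublings = math.ceil(math.log2(TARGET_GB / DRIVE_2000_GB))
    print(2000 + doublings)  # ~2014 at one density doubling per year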

> Suddenly, I envisage the subject for this with a whole forest of
> chemical analysis pipettes sticking out of his skull (to say the least)!
> As you indicate, one might be able to skip *some* of that "quadrillion
> synapses". But, even if you need, say "only a trillion" sensors, how do
> you do that with any ordinary tech?

I think Robin is a fan of "copyload" even if destructive readout is
required (Robin please correct me if that is an incorrect assumption).
(You have to remember, he's the author of the theory that the person
who gets to the playhouse first gets to monopolize all the toys... :-)).

I probably prefer more of a gradual evolutionary "upload" process over many
years. I'd bet David prefers this as well. The "copyload" process can probably
be done without "real" nanotech. The "evoload" process probably requires
long-term, real-time monitoring, and that would require either *very*
advanced biotech or real med-nano-tech.

Robert



This archive was generated by hypermail 2b29 : Mon Oct 02 2000 - 17:34:32 MDT