Re: FW: Singularity vs. Ronald Mcdonald

From: Stirling Westrup
Date: Tue Jul 31 2001 - 20:12:34 MDT

Eugene Leitl wrote:

> On Tue, 31 Jul 2001, Stirling Westrup wrote:
> > I tend to agree with your counterarguments, but would just like to
> > point out how enormously difficult many proposed nanotech projects
> > would be without some extremely sophisticated controlling software. It
> > may even be the case that strong AI is a prerequisite to making many of
> > our nanotech visions come true.
> No. Darwin in machina will do nicely to breed control algorithmics.

I strongly doubt it. We would need to:

1) Do everything in a secure environment, so that failing control functions
would not have a negative impact on the environment. We could also do it
in a simulator, but that would involve a major slowdown, and we're going
to have to simulate an astronomical number of generations in order to get
results. (The more complex the behaviour you are trying to generate, the
longer the evolutionary search strategy has to run.)
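Just to make point 1 concrete, here is a toy sketch of the generational loop I have in mind. Everything in it is a throwaway stand-in: the bit-string genome, the mutation rate, and especially the fitness function (counting 1-bits), which is trivially easy here and would be anything but for nanotech control (see point 2).

```python
import random

random.seed(0)  # reproducible run; purely illustrative

GENOME_LEN = 32
POP_SIZE = 50

def fitness(genome):
    # Placeholder: reward genomes with many 1-bits. A real fitness
    # function for nanotech control would be vastly harder to write.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with small probability.
    return [bit ^ (random.random() < rate) for bit in genome]

def evolve(generations):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]           # truncation selection
        pop = survivors + [mutate(g) for g in survivors]
    return max(pop, key=fitness)

best = evolve(200)
# Best fitness climbs toward GENOME_LEN as generations accumulate;
# the point is how many generations even this trivial target needs.
```

Even with a fitness function this simple, progress near the optimum slows to a crawl; scale the behaviour up and the generation count explodes, which is the slowdown problem above.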

2) Have a sophisticated enough fitness function that we can compare two
different blobs of junk and determine which has come closest to building a
cow. This will be extremely difficult to formulate, and will pretty much
require us to have a working hypothesis of how to build the target in the
first place. Thus, the fitness function is almost as hard to devise as
the control software itself.
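The chicken-and-egg problem in point 2 can be shown in a few lines. The structures below are hypothetical voxel grids, and the "cow" is a made-up bit pattern; the thing to notice is that the fitness function only works because it already contains a full description of the target.

```python
# Stand-in for "a cow": the fitness function must encode, voxel by
# voxel, exactly what we want built -- which is the hard part.
TARGET = [1, 1, 0, 1, 0, 0, 1, 1]

def fitness(structure):
    # Hamming similarity to the target.
    return sum(a == b for a, b in zip(structure, TARGET))

blob_a = [1, 0, 0, 1, 0, 0, 1, 0]
blob_b = [0, 0, 1, 0, 1, 1, 0, 0]

# fitness can rank the two blobs, but only via the explicit TARGET.
closer = max(blob_a, blob_b, key=fitness)
```

Strip TARGET out and there is nothing left to score against; in other words, writing the fitness function presupposes the working hypothesis of the target that we were hoping evolution would discover for us.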

3) Have a sophisticated enough monitoring system that we can actually
apply the fitness function. This is only needed if we don't go the full
simulation route, but if we do need active monitoring, then it will, in
itself, constitute a system needing extremely complex control software.
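To gesture at why point 3 is itself hard: before a fitness score can be computed in the real world, telemetry from every probe has to be collected and reconciled into one world model. The probe reports and voxel labels below are entirely hypothetical; even this toy version has to make a nontrivial estimation decision (majority vote) when probes disagree.

```python
from collections import defaultdict

def reconcile(reports):
    """Merge per-probe voxel observations into one world model.
    Conflicts are resolved by majority vote -- itself a control
    and estimation decision, and a crude one at that."""
    votes = defaultdict(list)
    for report in reports:              # one dict per nanoprobe
        for voxel, value in report.items():
            votes[voxel].append(value)
    return {v: max(set(vals), key=vals.count)
            for v, vals in votes.items()}

# Three probes observing two voxels, with one disagreement each:
reports = [
    {"(0,0)": 1, "(0,1)": 0},
    {"(0,0)": 1, "(0,1)": 1},
    {"(0,0)": 0, "(0,1)": 1},
]
world = reconcile(reports)   # majority view of both voxels
```

Scale that from three probes to hundreds of billions, add sensor noise and stale reports, and the monitoring layer starts to look like exactly the kind of complex control software we were trying to evolve our way out of writing.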

Writing a software system capable of deftly controlling a thousand real-
world machines to do a specific task is currently beyond our technology.
We will need control systems for swarms of hundreds of billions of
nanoprobes. I'm a big fan of alife and emergent technology, but I just
don't see us getting there by that method.

Some of the research currently being done on the design of emergent
'intelligence' by mimicking the kind of swarm signalling that happens in a
large termite hive *is* promising. (With millions of individuals, it at
least approaches the correct scale of integration.) But until we've done
enough work to know HOW the emergent behaviour in a hive emerges, we have
no guarantee that designing a swarm for a given task won't require strong
AI anyway.
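For anyone who hasn't seen the termite-style signalling idea in miniature, here is a toy sketch. All the parameters are illustrative: agents never talk to each other directly, they only read and write a shared "pheromone" field, and clustering emerges from those purely local rules. Nothing in it, though, tells us HOW to aim that emergence at a chosen task, which is exactly the gap I'm pointing at.

```python
import random

random.seed(1)  # reproducible toy run

WIDTH = 20
pheromone = [0.0] * WIDTH   # shared field on a ring of cells

def step(position):
    # Move toward the stronger-smelling neighbour, then deposit.
    left = pheromone[(position - 1) % WIDTH]
    right = pheromone[(position + 1) % WIDTH]
    if left != right:
        position = (position - 1) % WIDTH if left > right \
                   else (position + 1) % WIDTH
    else:
        position = (position + random.choice((-1, 1))) % WIDTH
    pheromone[position] += 1.0
    return position

agents = [random.randrange(WIDTH) for _ in range(30)]
for _ in range(100):
    pheromone[:] = [0.95 * p for p in pheromone]   # evaporation
    agents = [step(a) for a in agents]
# Agents tend to pile up around pheromone peaks -- emergent order,
# but aimed at nothing in particular.
```

The behaviour emerges, but steering it toward "build this specific structure" would mean reverse-engineering why the clustering happens, which is the open problem.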

Note that I'm not saying that strong AI *is* a prerequisite, just that I
would currently bet that it is more likely to be a requirement than not.

This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:40:00 MDT