> In discussing nanocomputer programming, I had said that they would
> probably be programmed in assembly language.
> On Mon, 26 Mar 2001, Emlyn wrote:
> > Why assembly? I'd be really interested to know why you'd say that. Just
> > space constraints?
> In large part. You are going to be constrained on memory *and*
> processing power. You are going to want to do as much as possible
> with as little as possible. Nanobots have to be small (to go crawling
> around inside cells). That puts strong limits on things unless you
> teleoperate them from larger 'bots in circulation. But there
> are limits on communication, interference from nearby 'bots, etc.
> that make this problematic. You may be heat density limited on the
> number of 'bots you can operate at the site of an infection
> (so they need to be as CPU cycle efficient as possible).
> In the 'real' world, you have similar problems -- if you want the
> simulations to run as fast as possible, the nodes have to be as
> close as possible and they have to generate as little heat
> per unit work done as possible. That means you don't want
> 'wasted' instructions in the execution stream.
OK, why would we use handcoded assembly language? I think there is a flaw in
the implicit reasoning about this stuff... you are using the analogy of the
early days of computing, when everything about computers was minimal except
for the price tag.
The difference with early nanotech is that it will occur in an environment
of *massive* computing power. Utterly incredible computing power (Hey, that
could be right now).
So your nanobots may be severely constrained in number of instructions,
amount of memory, and maybe processing speed?
So, what you do is to build a simulation of the nanobots + environment, in a
very powerful computing environment (use a couple of Blue Genes that are
lying around?), and develop a high level language in which you can write the
code for your bots. The high level language can be run in the sim directly
as if it were running on the nanobot, and used to control the bot.
The high level language is first compiled to BotOpCode, and optimised using
standard techniques. This is probably still a bit bloated (I imagine this
would be your conjecture, and the reason that you have said we'd be using
assembly).
Next, you build a genetic programming environment with which to create your
tight machine code. You use your bloated compilation result as the initial
gene set (probably with a few variations, optimised code, unoptimised code,
random sequences, anything else to fill out the population).
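That seeding step might look like the following sketch. Everything here is an illustrative stand-in: `compiled` and `optimised` are hypothetical compiler outputs, and the opcode set is invented, not a real BotOpCode toolchain.

```python
import random

random.seed(1)

OPCODES = list(range(16))          # toy stand-in for a BotOpCode instruction set

def randomised(program, flips=3):
    """A near-copy of the compiler output with a few opcodes perturbed."""
    variant = list(program)
    for _ in range(flips):
        variant[random.randrange(len(variant))] = random.choice(OPCODES)
    return variant

compiled  = [4, 4, 9, 1, 7, 7, 2, 0, 3, 3]   # stand-in: unoptimised compile
optimised = [4, 9, 1, 7, 2, 0, 3]            # stand-in: after standard optimisation

# Initial gene pool: both compiler outputs, a few perturbed variants,
# and some purely random programs to fill out the population.
population = (
    [list(compiled), list(optimised)]
    + [randomised(compiled) for _ in range(4)]
    + [[random.choice(OPCODES) for _ in range(len(compiled))] for _ in range(4)]
)
```

The point of mixing perturbed and random individuals is just to give the evolutionary search some initial diversity around the known-correct starting point.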
You use the simulation on one super fast machine as the fitness function.
Each gene set (a program) can be run in the sim, and rated as to how closely
it conforms to the operation of the original high-level language program.
This gives a rating of fitness. Also combined into the fitness rating is the
length of the machine code, and the amount of resources the program requires
to run; the smaller, the better. Probably, you wouldn't allow a program that
doesn't conform tightly to the behaviour of the sim to have a fitness better
than a certain level, so that only correct programs would get a better
fitness; above that level, optimisation requirements would be the deciding
factor.
Then, you use some kind of sexual recombination to mix up the programs and
promote them to the next round, dependent on the fitness results (how they
did in the sim). You might pop in some mutation if you think there's not
enough variation in the original gene pool, but that's mostly optional.
Lather, rinse, repeat. By this method, you should evolve very tight,
resource friendly routines. Of course, you only accept a result that has a
fitness above the correctness threshold, as specified above.
There have been some excellent results in this area - I remember reading
that the tightest bubblesort algorithm (was it bubblesort?) ever written had
61 instructions, and that a machine had evolved a version with 62 operations
using a method similar to that described above.
I bet that, if you had a few good generic sim packages around the traps, so
nanobot architects could sim their creations without coding the sim from
scratch, then you would be able to evolve tight code a *lot* more quickly
than a super coder could write it.
I bring all this up because I don't think that human coding time is getting
any cheaper, or that the doing of it is getting any faster... and to cut
assembly language for early nanobots, you are going to need to be a really
good coder. To do it quickly, you'd need to be a god. Set against a backdrop
of highly available, cheap, powerful computing, and very good sim and
evolutionary techniques (how, after all, do we get to nanobots in the first
place?), it seems odd that handcoded assembly would be the standard in
botcode.
Emlyn James O'Regan - Managing Director
Wizards of AU
http://www.WizardsOfAU.com
emlyn@WizardsOfAU.com
"Australian IT Wizards - US Technology Leaders
Pure International Teleworking in the Global Economy"
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:43 MDT