Re: Economic (ignorance) Nativism and me

From: Emlyn (emlyn@one.net.au)
Date: Mon Mar 26 2001 - 18:12:53 MST


>
> Emlyn suggested a plausible route around having programmers
> hand-code the nanobots in assembly. I believe
> I can accept the argument with one caveat. Since nanobots
> will potentially be dangerous, you are going to have to
> verify the code after it is optimized.
>
> So I suspect you may need a combination of human and machine
> tools to verify that the optimization didn't create some
> nasty side effects that the simulation could never anticipate.
>
> It's the "outside" of the "normal" range of operating conditions
> that always gets you. They found that out big time on the test
> flight of the Ariane 5.
>
> Robert

Agreed... a human sanity check on automated systems is good, provided that
there are actually parts of that task that a human can do better than a
machine.

It may be extraordinarily difficult for a human coder to follow what is
happening in a generated chunk of code. If the functionality is complex,
the original source would most likely be written in a highly structured and
modularised way; after the natural-selection optimisation process, though, the
resultant machine code would probably bear little resemblance to it. What a
dirty job!
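
To make the point concrete, here is a toy sketch in Python of the kind of
process we're talking about (obviously nothing like a real nanobot toolchain;
reference(), evolve() and verify() are names I've made up for illustration): a
structured reference function, a crude mutate-and-select loop that evolves a
cheaper candidate against samples from the "normal" range, and then a separate
verification pass that hammers the survivor with inputs well outside that
range, which is exactly where Robert says the trouble hides.

import random

def reference(x):
    # The hand-written, structured "source of truth".
    return 3 * x * x + 2 * x + 1

# A "candidate program" here is just a coefficient triple (a, b, c)
# standing in for a*x^2 + b*x + c.
def run(candidate, x):
    a, b, c = candidate
    return a * x * x + b * x + c

def error(candidate, samples):
    # Lower is better: total deviation from the reference on the samples.
    return sum(abs(run(candidate, x) - reference(x)) for x in samples)

def mutate(candidate):
    # Nudge one coefficient up or down by one.
    child = list(candidate)
    child[random.randrange(3)] += random.choice([-1, 1])
    return tuple(child)

def evolve(generations=5000, pop_size=30):
    samples = range(0, 10)  # the "normal" range the optimiser gets to see
    pop = [tuple(random.randint(-5, 5) for _ in range(3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: error(c, samples))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda c: error(c, samples))

def verify(candidate, trials=1000):
    # Robert's caveat: check the optimised result against the reference
    # on inputs far outside the range it was selected on.
    for _ in range(trials):
        x = random.randint(-10**6, 10**6)
        if run(candidate, x) != reference(x):
            return False
    return True

if __name__ == "__main__":
    best = evolve()
    print("evolved:", best, "verified:", verify(best))

Here the candidate is simple enough that a machine check over random inputs
suffices; with evolved machine code for something dangerous, the verification
step is exactly the part where a human gets handed an unreadable blob.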

I suspect that, as the humans in this process, we might end up with mostly
very dirty jobs like this in the future.

Emlyn


