Bryan Moss writes:
> So, wait, I'm confused, are we *looking* for the x86 machine code mutator or
> are we *running* it to see what wonders emerge from the swamp? If it's the
It depends. If we can find a good enough machine code mutator prior to the release of the worm to end all worms (nobody knows how difficult this is; a distributed.net/bochs project with some 10-100 k participants could give us at least a rough idea), so much the better. If we can't, the combined power of the Net might find one for us as a side effect. Such a mutation function would be very valuable indeed. It would also help us get rid of brittleware.
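To make the search concrete, here is a toy sketch (Python; a four-opcode stack machine stands in for bochs-sandboxed x86, and every name in it is made up) of what "evolving a mutator" means: score a parameterised mutation function by how often its offspring differ from the parent yet still compute the same thing, then hill-climb on those parameters. This is a sketch of the shape of the problem, not a claim about how the real thing would be built.

    import random

    # Stand-in for the bochs sandbox: a four-opcode stack machine.
    PUSH, ADD, MUL, NOP = 0, 1, 2, 3

    def run(code, x):
        # Execute toy bytecode on input x; malformed programs yield None.
        stack, i = [x], 0
        while i < len(code):
            op = code[i]
            if op == PUSH:
                i += 1
                if i >= len(code):
                    return None
                stack.append(code[i])
            elif op == ADD:
                if len(stack) < 2:
                    return None
                stack.append(stack.pop() + stack.pop())
            elif op == MUL:
                if len(stack) < 2:
                    return None
                stack.append(stack.pop() * stack.pop())
            # NOP and any unknown opcode do nothing.
            i += 1
        return stack[-1] if stack else None

    # The "host program" we want to vary without breaking: computes 3*x + 7.
    TARGET = bytes([PUSH, 3, MUL, NOP, PUSH, 7, ADD, NOP])

    def mutate(code, rates):
        # A parameterised mutator: one flip probability per byte position.
        out = bytearray(code)
        for i in range(len(out)):
            if random.random() < rates[i]:
                out[i] = random.randrange(256)
        return bytes(out)

    def fitness(rates, trials=200):
        # Good mutators yield offspring that differ from the parent yet
        # still behave identically on all test inputs.
        ok = 0
        for _ in range(trials):
            child = mutate(TARGET, rates)
            if child != TARGET and all(run(child, x) == run(TARGET, x)
                                       for x in range(5)):
                ok += 1
        return ok / trials

    # Crude (1+1) evolution of the mutator itself.
    best = [random.random() * 0.2 for _ in TARGET]
    for _ in range(100):
        cand = [min(1.0, max(0.0, r + random.gauss(0, 0.05))) for r in best]
        if fitness(cand) >= fitness(best):
            best = cand
    print([round(r, 2) for r in best], fitness(best))

On this toy the search settles on flipping the NOP positions (the only neutral loci) and leaving the rest alone; the real question is whether anything comparable exists for x86 at useful rates.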
(Even as just a killer demo, a nonmutating Wintel variant of the Morris worm would make a lot of people listen up if it took out 80% of the net in a single afternoon. Software diversity would increase and code review procedures would become much more stringent in the aftermath (I don't think the http://www-ccs.cs.umass.edu/~shri/iPic.html IP stack implementation has a lot of holes). We would ratchet up global system security quite a bit, for the equivalent of a bloody nose.)
> latter then are you proposing we send out a GA-based worm to test security
> by an ever evolving onslaught of system cracking? Which, and stop me if I'm
It would be a good idea to supply the worm with fortification functionality (making it the equivalent of a symbiont): after entry it patches the ingress hole, thus protecting the host from infection by other worms. Finding the hole is not equivalent to fixing it, however.
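To show what I mean by that asymmetry, a toy simulation (Python; everything here is hypothetical: hosts are dicts, holes are strings, nothing touches a network). Entry needs only that a hole exists; fortification needs a patch the worm may simply not carry.

    import random

    # KNOWN_PATCHES is the (hypothetical) set of holes the symbiont
    # actually knows how to close -- smaller than the set it can enter by.
    KNOWN_PATCHES = {"stale-finger-daemon"}

    def make_host(name):
        return {"name": name,
                "holes": set(random.sample(["stale-finger-daemon",
                                            "debug-backdoor",
                                            "unchecked-buffer"], 2)),
                "infected": False}

    def symbiont_visit(host):
        # Enter through some open hole, then try to patch that same hole.
        if host["infected"] or not host["holes"]:
            return
        ingress = next(iter(host["holes"]))   # the hole we came in by
        host["infected"] = True               # entering is the easy half
        if ingress in KNOWN_PATCHES:
            host["holes"].discard(ingress)    # fortify: close the door behind us
        # else: hole found but not fixed -- finding != fixing

    random.seed(1)
    for host in [make_host("host%d" % i) for i in range(4)]:
        symbiont_visit(host)
        print(host["name"], "still open:", sorted(host["holes"]))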
OSes should learn to mutate, too. Of course this might remove needed functionality, but systems don't persist in metastable regimes forever.
> getting this all wrong, would result in the coevolution of better
> worm-detectors and better worms until some sort of security equilibrium was
> established thus making the internet safe for future generations? This
> reminds me of the proposal (also on this list) to genetically engineer a new
> predator to keep mankind from slouching.
Opcode blocks evolve in micro- and milliseconds; human generations take decades. If somebody's machine crashes, it is not nearly as dramatic as the death of a person. Trying to fix flesh which is probably going to be obsolete in less than 100 years? Makes no sense.
Such a project _should_ be tested in a large, safe sandbox first, if possible. However, currently no one seems to be pursuing such a project, and meanwhile time is running out. In the early '80s nobody would have noticed if ARPAnet went down for a week; in 2010 a global network crash will be very serious business (pun intended) indeed. What will happen in 2020? 2030?
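For what it's worth, the minimal shape of such a test harness (Python sketch; candidate.py is a hypothetical script under test, and a real sandbox would of course mean bochs-style emulation plus a firewalled virtual network, not a mere subprocess):

    import subprocess, sys

    def run_candidate(path, timeout_s=5):
        # Throwaway child process, hard timeout, judged by exit status.
        try:
            proc = subprocess.run([sys.executable, path],
                                  capture_output=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return "hung"
        return "ok" if proc.returncode == 0 else "crashed"

    print(run_candidate("candidate.py"))   # hypothetical script under test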
I say just go for it, sandbox or no sandbox.