Re: SITE: Coding a Transhuman AI 2.0a

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Mon May 22 2000 - 21:32:35 MDT


Eugene Leitl wrote:

> We don't know how to write programs automatically in a robust
> fashion yet, however. So, the first problem would seem to be to
> mutate a population of mutation functions in a machine language
> (say, x86), screening the result for obvious nonviability (memory
> access out of bounds, time cycles exhausted, etc. -- because no OS
> is bulletproof (though some seem to take some sweeps at it -- see
> the crashme utility), we have to do this in a virtual machine like
> Bochs). We thus have a population of mutation functions, which are
> invoked to operate upon other members of the population. Clearly, a
> lot of knowledge is necessary to modify machine code without
> producing programs which bomb. We have no (or, at least, very
> little) idea of how to do this, so letting the machine solve this
> part of the task from scratch is prudent.

Dawkins mentions this sort of thing, briefly, in _Climbing Mount
Improbable_ under 'Kaleidoscopic Embryology' and the 'Evolution of
Evolvability.' His example is symmetry: what's good for the left side
is generally good for the right side, so evolving a genetic encoding
that allows the same variation to express itself on both is a good
thing. For instance, if I have two genes controlling the length of my
two legs, then a mutation in one will leave me walking round in
circles. If, instead, I have one gene controlling both, then my odds
of getting something useful are much better. I constrain the space of
possibilities and make the search more tractable.
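
A toy version of the two encodings (Python; everything here is my
illustration, not anything from Dawkins):

    import random

    # Encoding A: one gene per leg. A point mutation usually breaks
    # symmetry, and you walk round in circles.
    def mutate_independent(genome):
        g = list(genome)            # genome = [left_length, right_length]
        i = random.randrange(len(g))
        g[i] += random.gauss(0, 0.1)
        return g

    # Encoding B: one gene, expressed on both sides by development.
    # Every reachable mutant is still symmetric.
    def mutate_shared(genome):
        return [genome[0] + random.gauss(0, 0.1)]   # genome = [length]

    def develop(shared_genome):
        # the 'embryology': one gene -> two matching legs
        return [shared_genome[0], shared_genome[0]]

Encoding B can't express asymmetric legs at all - which is the point:
it trades expressiveness it doesn't need for a space it can search.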

In the general case, evolution is a graph search. Each possible
individual is a node, and mutations spell out the transition rules. By
manipulating the space of transition rules, we transform the search
space - we make some points closer together and some further
apart. Perhaps more importantly, local maxima can be eradicated
since we change the set of reachable nodes. (This analysis still
holds if we consider crossover operators, every node becoming a
population.)
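
In code, the claim is just that the neighborhood function *is* the
landscape. A minimal sketch (Python; the names are hypothetical):

    import random

    def hill_climb(start, fitness, neighbors, steps=1000):
        # 'neighbors' encodes the transition rules: it decides which
        # points are adjacent, and therefore where the local maxima are
        current = start
        for _ in range(steps):
            candidate = random.choice(neighbors(current))
            if fitness(candidate) >= fitness(current):
                current = candidate
        return current

Swap in a different neighbors() and you've transformed the search
space without touching fitness() at all.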

Finding transition rules which work more often than random - perhaps
by a meta-evolution over mutation operators - is formally identical to
developing a heuristic. The really exciting thing is that you don't
have to stop at meta-evolution, you can steam on ahead to
meta-meta-evolution: you get heuristics, heuristics for developing new
heuristics, heuristics for developing heuristics which develop new
heuristics, and so forth.
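
A sketch of one meta-level (Python, my names): score each mutation
operator by how often it improves a genome, then select the operator
population on that score.

    import random

    def operator_score(op, population, fitness, trials=100):
        # treat the operator itself as the thing under evaluation
        wins = 0
        for _ in range(trials):
            parent = random.choice(population)
            if fitness(op(parent)) > fitness(parent):
                wins += 1
        return wins / trials

    def meta_step(operators, population, fitness):
        # keep the better half of the operators; nothing stops you
        # from applying the same trick to meta_step itself
        ranked = sorted(operators,
                        key=lambda op: operator_score(op, population, fitness),
                        reverse=True)
        return ranked[:len(ranked) // 2]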

In addition to doing a search over mutation operators, as Eugene
mentioned, you can also evolve the semantics (the embryology) of
genotypes. For example, a point mutation operator which flips a bit
somewhere can have any consequence at all, depending on the
interpretation of the bit it flips. We can then favor representation
systems which tend to be mutable in useful ways. This is the notion of
'representation as a heuristic.'
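
A textbook illustration of the same point - not from the original
post - is Gray coding (Python):

    import random

    def flip_bit(bits):
        # the same point mutation in both cases: flip one random bit
        i = random.randrange(len(bits))
        return bits[:i] + [1 - bits[i]] + bits[i + 1:]

    def decode_binary(bits):
        # plain binary: 7 -> 8 is 0111 -> 1000, four simultaneous
        # flips away (a 'Hamming cliff')
        return int(''.join(map(str, bits)), 2)

    def decode_gray(bits):
        # Gray code: adjacent integers always differ in exactly one
        # bit, so a small phenotypic step is always one mutation away
        n = int(''.join(map(str, bits)), 2)
        mask = n >> 1
        while mask:
            n ^= mask
            mask >>= 1
        return n

Same genotypes, same mutation operator - but under decode_gray the
useful small steps are always reachable, so that representation is
mutable in useful ways.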

Take for example a program written in C and a program written in Ada:
both are equivalently expressive - they're both equally good notations
for Turing machine tapes - but a well-formed Ada program is more
likely to do something meaningful than a C program, since type safety
is a static requirement for valid Ada programs. Maybe this example
isn't convincing - but consider C and a strongly-typed variant of it:
the variant is no less expressive, but it admits fewer programs, and
the ones it excludes are mostly dopey rather than useful.
(Heuristically speaking.) There's a connection here to compression
and information entropy - we make good ideas short and bad ideas long.
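
The cash value in a search loop is a rejection test that costs almost
nothing (a Python sketch; type_checks stands in for whatever static
discipline the representation enforces):

    def viable_children(population, mutate, type_checks):
        # discard ill-typed mutants before spending any fitness
        # evaluations on them; the stricter the representation,
        # the more junk gets filtered here for free
        children = (mutate(g) for g in population)
        return [child for child in children if type_checks(child)]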

Or consider mutations, constrained to well-formed programs, operating
on a C program vs. an assembly program: you're more likely to
accomplish something by transforming a call to Build_Nest() into
Dig_Hole() than by turning JMP 100000 into JMP 200000. There are
potentially big wins if you can embed semantics in your
representation's syntax. Function -> Function is far better than
Address -> Address.
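
A cartoon of the two regimes (Python; the behavior names echo the
Build_Nest/Dig_Hole example above):

    import random

    BEHAVIORS = ['build_nest', 'dig_hole', 'find_food', 'flee']

    def mutate_call(program):
        # Function -> Function: swap one well-typed call for another.
        # Every mutant is still a program about behaviors.
        p = list(program)
        i = random.randrange(len(p))
        p[i] = random.choice(BEHAVIORS)
        return p

    def mutate_jump(code, address_space=2**20):
        # Address -> Address: retarget a raw jump. Almost every
        # mutant lands somewhere meaningless and bombs.
        c = list(code)
        i = random.randrange(len(c))
        op, _ = c[i]                  # e.g. ('JMP', 100000)
        c[i] = (op, random.randrange(address_space))
        return c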

Combined with some of the learning-as-scientific-discovery and
concept-learning-as-representation-formation stuff, I think this
opens up some interesting avenues for research.

Which is all to say, in effect, screw doing it by hand. ;/

-matt


