Re: SITE: Coding a Transhuman AI 2.0a

From: Eugene Leitl
Date: Mon May 22 2000 - 20:01:44 MDT

Dan Fabulich writes:
> Gene quipped:
> As usual: Necessary, but not sufficient.

But an evolutionary process not only writes stuff in opaque fashion,
it also typically does much better than the human programmer. A
darwinian process solving a limited-scale problem produces FPGA
circuits consisting of a mesh of coupled positive and negative
feedback loops (duh, "strangely" our brain hardware works that
way, too), which solve a task in a much more compact fashion than a
human designer would (alas, so far this doesn't scale to
high-complexity tasks). Since human designers typically take decades
to make, in a very expensive process with a very low yield, it is
perhaps best to keep the number of human experts as close to zero as
possible.
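A toy sketch of that kind of darwinian loop, with the FPGA measurement stubbed out. The genome here is just an opaque configuration bitstring scored by a black-box function; all names and parameters are illustrative, not taken from any real EHW rig, and a real run would download the bits to the chip and measure the physical circuit instead:

```python
import random

# Toy darwinian loop in the spirit of evolved-FPGA experiments: the
# genome is an opaque bitstring, fitness is a black-box measurement.
GENOME_BITS = 64
POP_SIZE = 30
GENERATIONS = 300
MUT_RATE = 0.02          # per-bit flip probability

def measure(genome):
    # stand-in for measuring the configured circuit's behaviour;
    # here simply "count the ones" so the loop has something to climb
    return sum(genome)

def mutate(genome):
    # flip each bit independently with probability MUT_RATE
    return [b ^ (random.random() < MUT_RATE) for b in genome]

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=measure, reverse=True)
        elite = pop[:POP_SIZE // 2]                    # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=measure)

best = evolve()
```

Because the elite half is copied unchanged each generation, the best score never regresses; the "design" that comes out is still just an uninterpreted bitstring, which is exactly the opacity point above.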

It is not really necessary to have a human- or superhuman-level
process to generate code of a complexity higher than itself; in fact,
we wouldn't be here if that were the case. We have no idea how to code
an all-purpose metamethod, however, with the exception of evolutionary
algorithms applied to the algorithm descriptions themselves. This is
the only metamethod which we know for certain works. We ourselves are
instances of another metamethod (intelligence) produced by a
metamethod (darwinian evolution); unfortunately, we don't come with a
manual. Being so very poorly documented makes reverse-engineering in
any fashion other than clean-room quite difficult (we haven't been
designed with understandability-by-ourselves as part of the spec
sheet), and I also surmise (YMMV) that we'll run into an evolutionary
algorithm as the algorithm operating very close to the hardware layer,
so switching to darwin in machina right from the start appears
prudent, if we want to address the issue in a focused fashion rather
than perpetuating the AI debacle.
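One minimal, concrete instance of an evolutionary algorithm applied to part of its own description is a self-adaptive evolution strategy, where each candidate carries its own mutation step size and that step size is itself mutated, so the evolver tunes its own operator. This is a standard ES device, sketched here with made-up parameters as illustration, not as anyone's actual proposal:

```python
import random

def self_adaptive_es(f, dim=5, generations=500):
    """(1+1)-style evolution strategy whose mutation step size sigma
    is itself under mutation and selection."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0
    for _ in range(generations):
        s2 = sigma * (2.0 ** random.gauss(0, 0.3))    # mutate the mutator
        x2 = [xi + random.gauss(0, s2) for xi in x]   # mutate the solution
        if f(x2) <= f(x):              # elitist selection: keep the better,
            x, sigma = x2, s2          # and the step size that produced it
    return x, sigma

sphere = lambda v: sum(vi * vi for vi in v)   # toy fitness: minimize
best_x, best_sigma = self_adaptive_es(sphere)
```

The step size that survives is the one that keeps producing accepted offspring, i.e. a piece of the algorithm is being evolved along with the answer.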

We don't yet know how to write programs automatically in a robust
fashion, however. So the first problem would seem to be to mutate a
population of mutation functions in a machine language (say, x86),
screening the results for obvious nonviability (memory access out of
bounds, time cycles exhausted, etc.). Because no OS is bulletproof
(though some seem to take some sweeps at it -- see the crashme
utility), we have to do this in a virtual machine like Bochs. We thus
have a population of mutation functions, which are invoked to operate
upon other members of the population. Clearly, a lot of knowledge is
necessary to modify machine code without producing programs which
bomb. We have no (or, at least, very little) idea of how to do this,
so letting the machine solve this part of the task from scratch seems
the only viable approach.
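A minimal sketch of the screening step, with a toy register VM standing in for a real sandbox like Bochs. The instruction set, memory size, and cycle budget are all invented for illustration; the point is only that a bombing mutant kills its sandbox, not the host:

```python
import random

MEM_SIZE = 16        # toy address space
CYCLE_BUDGET = 100   # hard limit, catches nonterminating mutants

def run_sandboxed(code):
    """Run a candidate program; return final memory, or None if the
    program is nonviable (bad access, illegal op, budget exhausted)."""
    mem = [0] * MEM_SIZE
    pc = 0
    for _ in range(CYCLE_BUDGET):
        if pc >= len(code):
            return mem                   # fell off the end: clean exit
        op, arg = code[pc]
        pc += 1
        if op == "inc":
            if not 0 <= arg < MEM_SIZE:
                return None              # memory access out of bounds
            mem[arg] = (mem[arg] + 1) % 256
        elif op == "jmp":
            if arg < 0:
                return None              # jump to nowhere
            pc = arg
        else:
            return None                  # illegal instruction
    return None                          # time cycles exhausted

def random_program(length=8):
    # deliberately includes illegal ops and out-of-range arguments
    return [(random.choice(["inc", "jmp", "bad"]), random.randint(-2, 20))
            for _ in range(length)]

# screen a random population, keeping only the viable survivors
population = [random_program() for _ in range(100)]
viable = [p for p in population if run_sandboxed(p) is not None]
```

In the scheme described above, the mutation functions themselves would be members of such a population, applied to each other, with this kind of sandbox deciding who survives the first cut.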

After we find our magic mutation function, its fitness landscape will
react much more benignly to mutations. (This is really, really, really
important. Believe me.) This process has to be repeated whenever we
change our substrate, be it machine programs, massively parallel
machine programs tightly coupled by asynchronous message passing,
FPGAs, or cellular automata patterns. (In fact, in the latter case one
also has to coevolve the rule, which increments the self-referentiality
by another notch.)
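A hedged sketch of that last case: for cellular automata, the rule table and the initial pattern are coevolved as a single genome, since the pattern's meaning depends on a rule that is itself under mutation. The fitness target here is an arbitrary toy, chosen only so the loop has something to select for:

```python
import random

WIDTH, STEPS = 32, 16

def step(row, rule):
    # 1D, radius-1 binary CA with wraparound; rule is an 8-entry table
    return [rule[(row[(i - 1) % WIDTH] << 2) |
                 (row[i] << 1) |
                  row[(i + 1) % WIDTH]] for i in range(WIDTH)]

def fitness(genome):
    rule, row = genome
    for _ in range(STEPS):
        row = step(row, rule)
    # toy target: end up with exactly half the cells alive
    return -abs(sum(row) - WIDTH // 2)

def mutate(genome):
    rule, row = genome
    rule = [b ^ (random.random() < 0.10) for b in rule]  # mutate the rule...
    row = [b ^ (random.random() < 0.05) for b in row]    # ...and the pattern
    return rule, row

# each genome is (rule table, initial pattern) -- coevolved together
pop = [([random.randint(0, 1) for _ in range(8)],
        [random.randint(0, 1) for _ in range(WIDTH)]) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
best_rule, best_seed = max(pop, key=fitness)
```

A mutation to the rule changes what every pattern means, which is the extra notch of self-referentiality the paragraph above points at.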

As measured by the metric of a rodent (a mouse..rat), I don't see such
intelligence in any machine other than evolvable hardware (EHW) boxes,
which are run by evolutionary algorithms. Of course, you are all
welcome to try (I haven't read it yet, but I will once my deadline is
past), but I just don't see it sprouting legs, blowing you a
raspberry, and running off, sorry.

This archive was generated by hypermail 2b29 : Thu Jul 27 2000 - 14:11:31 MDT