Re: Singularity optimization [Was: Colossus and the Singularity]

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Sat Jan 27 2001 - 11:43:15 MST


Anders Sandberg wrote:
>
> The problem here with the search is that if the current program is P
> (which maps integers to candidate new programs which are then
> evaluated), the empirical search process is of the form Pnew =
> argmax_i V(P(i)) where V is the estimated value of solution
> P(i). Hill-climbing can be seen as having programs that generate
> programs similar to themselves, genetic algorithms would have
> 'programs' P that are really populations of programs and the deductive
> 'rationalist' approach would be a program that generates a single
> successor that has a higher V than itself. Now, where does this search
> end up? In the general case the landscape will not be amenable to
> *efficient* optimization at all (due to the TANSTAAFL theorems) - any
> initial program in the set of all seed programs will statistically do
> just as well as any other. This is simply because most optimization
> landscapes are completely random.
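
Restating that formalism in toy code, as I read it -- the bit-string
"programs" and the value function V below are placeholders of mine, not
anything from Anders's post:

import random

def V(program):
    """Toy value function: count the 1-bits -- a structured landscape,
    unlike the maximum-entropy case discussed below."""
    return sum(program)

def P(current, i, rng):
    """Candidate generator: the i-th candidate is a one-bit mutation of
    the current program, i.e. the hill-climbing flavour of the scheme."""
    candidate = list(current)
    candidate[rng.randrange(len(candidate))] ^= 1
    return candidate

def step(current, n_candidates, rng):
    """One step of Pnew = argmax_i V(P(i))."""
    candidates = [P(current, i, rng) for i in range(n_candidates)]
    return max(candidates, key=V)

rng = random.Random(0)
program = [0] * 32
for _ in range(50):
    program = step(program, n_candidates=8, rng=rng)
print(V(program))   # climbs toward 32, because this landscape has structure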

"Any form of cognition which is mathematically formalizable or has a
provably correct implementation is too simple to contribute materially to
intelligence."

It's interesting that the people who try to visualize the process
mathematically are the ones who are most likely to deny the "hard takeoff"
scenario. I myself have always believed that if you can't solve the
three-body problem for gravitation, you sure as heck can't solve the
trillion-transistor problem for intelligence. The process you describe
bears no recognizable resemblance to the way that I, a general
intelligence, write code.

In the "general case", the landscape is not subject to optimization.
Intelligence is an evolutionary advantage because it enables the organism
to model, predict, and manipulate regularities in reality. In a
maximum-entropy Universe, which is what you're talking about, intelligence
is impossible and so is evolution. The fitness landscape you're talking
about, and that the paper you cited is talking about, bears no useful
resemblance to our own low-entropy reality.
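
To make that concrete (a toy illustration of mine, not anything from the
paper): assign every candidate an independent random value, and it no
longer matters whether you mutate your best candidate so far or sample
blindly -- the best value found after the same number of distinct
evaluations has the same distribution either way.

import random

rng = random.Random(1)

def value(candidate):
    """Pure-noise landscape: the value is a pseudo-random function of the
    candidate and nothing else."""
    return random.Random(candidate).random()

def neighbour_search(k, n_bits=128):
    """Greedy search: always mutate the best candidate seen so far.
    Already-scored candidates are skipped, so exactly k distinct points
    get evaluated."""
    best = rng.getrandbits(n_bits)
    best_v, tried = value(best), {best}
    while len(tried) < k:
        cand = best ^ (1 << rng.randrange(n_bits))   # flip one bit
        if cand in tried:
            continue
        tried.add(cand)
        v = value(cand)
        if v > best_v:
            best, best_v = cand, v
    return best_v

def blind_search(k, n_bits=128):
    """Score k candidates drawn uniformly at random."""
    return max(value(rng.getrandbits(n_bits)) for _ in range(k))

trials, k = 200, 100
print(sum(neighbour_search(k) for _ in range(trials)) / trials)   # ~0.99
print(sum(blind_search(k) for _ in range(trials)) / trials)       # ~0.99

On a structured landscape the greedy searcher pulls ahead; here it cannot,
because there is nothing for it to learn.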

If you drain out all the interesting complexity of intelligence and
abstract the scenario to the point that it talks about the space of all
possible Universes, then I guess a fast Singularity is impossible - as
impossible as intelligence or evolution, two other cases of directed
improvement.

Here in our own world, I find it much easier to visualize concretely the
effect of a human being capable of reprogramming vis own neurology, or
switching axons for optical fibers to get a millionfold subjective
speedup, or an AI developing the visualization process needed to invent
nanotechnology and moving to a 10^21 ops/sec rod logic. That's a pretty
darn fast Singularity, regardless of whether the abstract curve exhibits
any recognizable mathematical behavior.
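
The "millionfold", incidentally, is just the ratio of signal speeds, in
round numbers of my own choosing:

axon_speed  = 1.0e2   # m/s, fast myelinated axon, order of magnitude
fibre_speed = 2.0e8   # m/s, light in optical fibre, roughly 2/3 c
print(fibre_speed / axon_speed)   # ~2e6: about a millionfold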

You completely left recursive self-enhancement out of your description of
the process. You described a constant (and pretty darn blind-looking)
function trying to improve a piece of code, rather than a piece of code
improving itself, or a general intelligence improving vis component
subprocesses.
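
The structural difference shows up even in a toy model (mine, not a claim
about real minds): let the improving machinery itself be one of the things
each cycle improves.

def fixed_improver(cycles, step=1.0):
    """The framing above: the generator never changes, only the artifact."""
    quality = 0.0
    for _ in range(cycles):
        quality += step               # the same move, cycle after cycle
    return quality

def self_enhancing(cycles, step=1.0):
    """Each cycle improves the artifact *and* the machinery that will do
    the improving on the next cycle."""
    quality = 0.0
    for _ in range(cycles):
        quality += step
        step *= 1.1                   # the improver improves its improver
    return quality

print(fixed_improver(50))    # 50.0  -- linear in the number of cycles
print(self_enhancing(50))    # ~1164 -- geometric in the number of cycles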

Finally, if the improvement curve is so horribly logarithmic, then why
didn't the vast majority of BLIND evolution on this planet take place in
the first million years? If increasing complexity or increasing
improvement renders further improvements more difficult to find, then why
doesn't BLIND evolution show a logarithmic curve? These mathematical
theories bear no resemblance to *any* observable reality.

If BLIND evolution is historically observed to move at a linear or better
rate, then self-improvement should proceed at an exponential or better
rate. Differential equations don't get any simpler than that.
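
Blind evolution's rate of improvement does not depend on how much has
already been built; a self-improver's rate does. Spelled out, with y for
optimization power and C, k whatever positive constants the observations
pin down (my notation):

  dy/dt = C     =>   y(t) = y(0) + C*t         (blind evolution: linear)
  dy/dt = k*y   =>   y(t) = y(0) * exp(k*t)    (self-improvement: exponential)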

-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence


