Re: Singularity optimization [Was: Colossus and the Singularity]

From: Samantha Atkins (samantha@objectent.com)
Date: Sat Jan 27 2001 - 14:43:34 MST


Anders Sandberg wrote:
>

>
> Why? The learning system needs to learn about its "environment" (in
> this case the space of algorithms and how it translates into computer
> code) and move relevant information (relevance is measured by the
> system by its value functions) from this environment into itself. Just
> making deductions doesn't seem to be an efficient way of doing this,
> since they are based on the pool of knowledge already in the system; a
> program cannot increase its algorithmic complexity just by extending
> itself with code it writes. So what is left is empirical
> experimentation with code (guided by deductions from already learned
> stuff). The system needs to learn from its environment about many
> things, including what kind of statistical environment it is - if it
> is a smooth fitness landscape code optimization is best done by
> hill-climbing or something similar, while on a rugged but regular
> landscape other algorithms are needed.

Is there a good reason you left out induction?
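
(To make the smooth-versus-rugged point concrete, here is a toy sketch
in Python. Everything in it is a stand-in: the bit-string "program", the
two value functions and the deceptive "trap" landscape are illustrative
only, not a claim about the real space of AI code. Single-bit
hill-climbing saturates the smooth landscape but reliably misses the
global optimum of the deceptive one.)

import random

N = 64  # bits in each toy "program"

def value_smooth(program):
    # Smooth landscape: value is the count of 1-bits, so every single-bit
    # change moves the value by exactly 1 and local search gets reliable
    # gradient information.
    return sum(program)

def value_rugged(program):
    # Deceptive "trap" landscape: all-ones is the global optimum (N + 10),
    # but every local gradient points toward all-zeros (value N), so
    # single-bit hill-climbing is systematically misled.
    ones = sum(program)
    return N + 10 if ones == N else N - ones

def hill_climb(value, steps=5000):
    program = [random.randint(0, 1) for _ in range(N)]
    best = value(program)
    for _ in range(steps):
        i = random.randrange(N)
        program[i] ^= 1              # try a single-bit mutation
        v = value(program)
        if v >= best:
            best = v                 # keep it
        else:
            program[i] ^= 1          # revert it
    return best

print("smooth landscape: reached", hill_climb(value_smooth), "of a possible", N)
print("trap landscape:   reached", hill_climb(value_rugged), "of a possible", N + 10)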

> What about the space of code relevant for AI? Most likely it is nicer
> than the general case, so there are some seed programs that are on
> average better than others. But we should remember that this space
> still contains Gödel-undecidable stuff, so it is by no means
> simple. The issue here is whether even the most optimal seed program
> converges towards an optimum fast enough to be interesting. To
> complicate things, programs can increase the dimensionality of their
> search space by becoming more complex, possibly enabling higher values
> of V(P) but also making the search problem exponentially harder. So
> there is a kind of limit here, where systems can quickly improve their
> value in the first steps, but then rapidly get a search space that is
> getting more and more unmanageable. I guess the "intelligence" of such
> a system would behave like the logarithm of time or worse - definitely
> not exponential.
>
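
(A back-of-the-envelope reading of the "logarithm of time" claim, with
made-up numbers: if each extra unit of value enlarges the search space
so that level k costs roughly 10^k candidate evaluations to reach, then
a budget of t evaluations buys only about log10(t) levels.)

import math

# Toy numbers only: suppose each additional unit of value enlarges the
# search space so that finding level k costs about 10**k candidate
# evaluations.  The level reached then grows like log10 of the budget:
# fast at first, nearly flat afterwards.
spent = 0
for k in range(1, 8):
    spent += 10 ** k
    print(f"level {k} reached after ~{spent:.1e} evaluations"
          f" (log10 of budget = {math.log10(spent):.2f})")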

<the following is sloppy but may be worth some thought>

Gödel undecidability is probably not terribly relevant given an ability
to make inductive leaps and test them, and enough looseness in the
system to allow workable solutions to be discovered and deployed that
are not fully mathematically rigorous.

If the searching mechanisms are capable of expanding the parallelism of
the search and of combining branches in ways difficult to express
mathematically (rather sloppily, as within the brain), I think the above
arguments are unnecessarily restrictive.
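
Roughly the sort of thing I mean, as a toy sketch: a small population of
candidate "programs" searched in parallel, with crossover standing in
for combining branches. The bit-string representation and the value
function are placeholders of mine, not a proposal for real AI code.

import random

N_BITS, POP, GENERATIONS = 64, 40, 100

def value(program):
    # Placeholder value function, standing in for V(P).
    return sum(program)

def crossover(a, b):
    # Combine two branches of the search at a random cut point.
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(program, rate=1.0 / N_BITS):
    return [bit ^ 1 if random.random() < rate else bit for bit in program]

population = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Many candidate programs are explored "in parallel" each generation,
    # and the better halves of different branches get recombined.
    population.sort(key=value, reverse=True)
    parents = population[: POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best value found:", max(value(p) for p in population))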

> I would recommend Christoph Adami's _Introduction to Artificial Life_,
> Springer, New York 1998 for a discussion of environment
> information gathering in evolving programs (based on his papers on
> this; see http://www.krl.caltech.edu/~adami/cas.html).

I would not say it is what you have shown. It is what you have argued.

>

>
> It should be noted that the choice of values is not just important,
> but likely the most complex problem! There seems to be some general
> rule (a bit of experience-based handwaving here) that if you
> reformulate an optimisation problem so that the solution algorithm
> becomes very simple (like a GA or neural network) then a lot of
> effort has to be spent on setting parameters like the fitness
> function or network parameters; the total effort of writing the
> algorithm, setting parameters and running it appears to be roughly
> constant (it would be interesting to see if this can be proven). So
> the AI value function can likely be as complex as the finished program
> itself if the seed program is simple! I think this is because the
> above kind of system doesn't interact much with any external
> environment feeding them extra information, all the algorithmic
> complexity has to come "from inside". In real life, the harsh fitness
> function of a complex reality already holds a tremendous amount of
> information.
>

Yes. Built-in forcing functions will force conclusions to be drawn and
tried regardless of mathematical completeness. This is a good point.
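
The standard toy example makes the point vividly: a search that
"evolves" a target string succeeds only because the value function
already contains the whole answer. (The target string and mutation rate
below are arbitrary choices of mine.)

import random
import string

TARGET = "COLOSSUS"          # arbitrary; note it appears only inside the value function
ALPHABET = string.ascii_uppercase

def value(candidate):
    # All of the information about the "solution" lives here, in the value
    # function, not in the mutate-and-keep loop below.
    return sum(c == t for c, t in zip(candidate, TARGET))

best = [random.choice(ALPHABET) for _ in TARGET]
while value(best) < len(TARGET):
    child = [random.choice(ALPHABET) if random.random() < 0.1 else c for c in best]
    if value(child) >= value(best):
        best = child
print("".join(best))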

> To sum up, I doubt Colossus, even given its cameras and missiles,
> would have been able to bootstrap itself very fast. It seems it had
> been given a very simple value function, and that would likely lead to
> a very slow evolution. To get a real AI bootstrap we probably need to
> connect it a lot with reality, and reality is unfortunately rather
> slow even when it is the Internet.
>

Another possibility is an ALife micro-world operating at higher speed
but with less driving complexity.
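
Something like this in spirit, as a deliberately trivial sketch. The
ring-shaped world, the random-walk "agent" and the numbers are all made
up; the only point is that a simple enough micro-world can hand the
learner on the order of a million interactions per second of real time,
which the Internet never will.

import random
import time

SIZE = 16                    # a tiny ring-shaped world with food scattered on it

def run(ticks):
    world = [random.random() < 0.2 for _ in range(SIZE)]
    position, eaten = 0, 0
    for _ in range(ticks):
        position = (position + random.choice((-1, 1))) % SIZE   # trivially simple agent
        if world[position]:
            world[position] = False
            eaten += 1
            world[random.randrange(SIZE)] = True                # food regrows elsewhere
    return eaten

start = time.time()
ticks = 10 ** 6
run(ticks)
print(f"{ticks} world-ticks simulated in {time.time() - start:.2f} s of real time")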

- samantha


