Samantha Atkins <samantha@objectent.com> writes:
> Is there a good reason you left out induction?
No, just forgot about it. Replace 'deduction' with 'deduction and
induction' in the text, and it still makes sense.
> > What about the space of code relevant for AI? Most likely it is nicer
> > than the general case, so there are some seed programs that are on
> > average better than others. But we should remember that this space
> > still contains Gödel-undecidable stuff, so it is by no means
> > simple. The issue here is whether even the most optimal seed program
> > converges towards an optimum fast enough to be interesting. To
> > complicate things, programs can increase the dimensionality of their
> > search space by becoming more complex, possibly enabling higher values
> > of V(P) but also making the search problem exponentially harder. So
> > there is a kind of limit here, where systems can quickly improve their
> > value in the first steps, but then rapidly get a search space that is
> > getting more and more unmanageable. I guess the "intelligence" of such
> > a system would behave like the logarithm of time or worse - definitely
> > not exponential.
>
> <the following is sloppy but may be worth some thought>
>
> Gödel undecidability is probably not terribly relevant given an ability
> to make inductive leaps and test them, and enough looseness in the system
> to allow workable solutions to be discovered and deployed that are not
> fully mathematically rigorous.
Sure. But the existence of Gödel nastiness in the workspace implies
that other nastiness is also likely, such as statements that look very
likely to be true but actually fail, rarely and unpredictably.
> If the searching mechanisms are capable of expanding the parallelism of
> the search and of combining branches in ways difficult to
> express mathematically (rather sloppily, as within the brain), I think
> the above arguments are unnecessarily restrictive.
Hmm? Massive parallelism doesn't solve the problem of combinatorial
explosions, it just delays them a little bit.
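A quick back-of-the-envelope (my numbers are arbitrary, the scaling is
the point): if the search tree has branching factor b, reaching depth d
costs on the order of b^d probes, so p processors making T probes each
only reach depth log(p*T)/log(b).

  from math import log

  b, T = 10, 1e9                      # branching factor, probes per machine
  for p in (1, 10**3, 10**6):
      # a millionfold increase in parallelism buys only six more levels
      print(p, log(p * T) / log(b))   # reachable depths: 9, 12, 15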
And combining branches in complex ways is expensive, while combining
them in a sloppy, analogical fashion only works as long as the
environment you are working in has useful regularities. If the space of
AI programs has some simple regularity you can discover, then it is
possible to use that to make progress more efficient. But if it lacks
such regular structure, analogies will not be efficient. Given the kind
of complexity a space of programs entails, I believe it is much more
likely that the AI problem lacks that regularity.
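To tie this back to the dimensionality point above, here is a toy
simulation (entirely my own sketch, with made-up parameters, so take it
as illustration rather than evidence): each improvement raises the
value V(P) but also adds a dimension to the search space, so the
waiting time for the next improvement grows exponentially and V ends up
crawling along like log(t).

  import random

  def bootstrap(steps=100000, branching=2.0):
      v, d = 1.0, 1              # value V(P), search-space dimensionality
      history = []
      for _ in range(steps):
          # the chance that a random probe finds an improvement falls
          # off exponentially with dimensionality
          if random.random() < branching ** (-d):
              v += 1.0           # found a better program...
              d += 1             # ...which lives in a bigger space
          history.append(v)
      return history

  h = bootstrap()
  for t in (10, 100, 1000, 10000, 100000):
      print(t, h[t - 1])         # V grows roughly like the logarithm of t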
It is a bit like Chaitin's proof that most computer programs are not
compressible and essentially random: if you choose a program at random
it will likely be messy, and the effort needed to understand it will be
on the order of the number of bits in the program. When we program, we
build things from modules, using small pieces of more irreducible code
to build larger systems. That is a nice subspace of program space, but
we have no evidence that AI lies inside it, especially not the road
towards greater and greater AI.
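The counting argument behind this is simple enough to check in a few
lines (this is the standard textbook argument, not Chaitin's own
formalism): there are 2^n programs of n bits but only 2^(n-k+1) - 1
descriptions of length n-k or less, so fewer than a 2^(1-k) fraction
can be compressed by k or more bits.

  n = 1000                            # program length in bits
  for k in (8, 16, 32):
      shorter = 2 ** (n - k + 1) - 1  # descriptions of length <= n-k
      print(k, shorter / 2 ** n)      # fraction compressible: < 2**(1-k)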
Analogies work well for us humans because we usually live in the same
environment, which does have a fairly regular structure. If you move
into a mathematical environment, analogies become noticeably less
efficient, at least in some branches of mathematics.
> > I would recommend Christoph Adami's _Introduction to Artificial Life_,
> > Springer, New York 1998, for a discussion of environmental
> > information gathering in evolving programs (based on his papers on
> > this; see http://www.krl.caltech.edu/~adami/cas.html).
>
> I would not say it is what you have shown. It is what you have argued.
Huh? What I meant was that the book is to a large extent based on his
papers. He has some newer results out now, including a nice Nature
paper.
> > To sum up, I doubt Colossus, even given its cameras and missiles,
> > would have been able to bootstrap itself very fast. It seems it had
> > been given a very simple value function, and that would likely lead to
> > a very slow evolution. To get a real AI bootstrap we probably need to
> > connect it a lot with reality, and reality is unfortunately rather
> > slow even when it is the Internet.
> >
>
> Another possibility is an ALife micro-world operating at higher speed
> but with less driving complexity.
But then you will have to supply a suitable ALife microworld, one that
has enough complexity and emergence to really offer anything to the
emerging AI. Compare the real world to the Game of Life: in the real
world new layers of complexity emerge from lower layers many times over
(quantum fields -> quarks -> hadrons -> atoms -> molecules ->
chemistry -> biology -> ecology and so on), while in the Game of Life
we apparently get just one layer of emergence (gliders and similar
patterns) and no further layers.
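For anyone who wants to check the claim, a minimal Life stepper is only
a dozen lines (the completely standard algorithm, included just to make
the point concrete): the glider keeps gliding forever, but nothing new
ever emerges at a level above the gliders.

  from collections import Counter

  def step(live, w=20, h=20):
      # count the live neighbours of every cell adjacent to a live cell
      counts = Counter(((x + dx) % w, (y + dy) % h)
                       for (x, y) in live
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       if (dx, dy) != (0, 0))
      # birth on 3 neighbours, survival on 2 or 3
      return {c for c, n in counts.items()
              if n == 3 or (n == 2 and c in live)}

  cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
  for _ in range(4):
      cells = step(cells)
  print(sorted(cells))   # the same five cells, shifted one step diagonally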
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y