"Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
> In the "general case", the landscape is not subject to optimization.
> Intelligence is an evolutionary advantage because it enables the organism
> to model, predict, and manipulate regularities in reality. In a
> maximum-entropy Universe, which is what you're talking about, intelligence
> is impossible and so is evolution. The fitness landscape you're talking
> about, and that the paper you cited is talking about, bears no useful
> resemblance to our own low-entropy reality.
True. My point is that if you want to build something that functions
in the real, low-entropy world, then you have a good chance. But if
the process only goes on inside the high-entropy world of arbitrary
algorithms, then you will likely not get any good results. This is why
I consider "transcendence in a box" scenarios so misleading. Having
something transcend in the real world is another matter - but there we
also get slower interactions as a limiting factor.
> You completely left recursive self-enhancement out of your description of
> the process. You described a constant (and pretty darn blind-looking)
> function trying to improve a piece of code, rather than a piece of code
> improving itself, or a general intelligence improving vis component
> subprocesses.
Hmm, my description may not have been clear enough then. What I was
looking at was a sequence where program P_n searches for a replacement
program P_{n+1}. That is recursive, although being a theoretician I
tried to frame it in such a way that it could include both a single
program optimizing itself and the development of whole software
ecologies within the symbol 'P'. I guess I should have formulated
myself more clearly - the danger of just writing a post and not
reading it through carefully.
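To make the recursion concrete, here is a minimal toy sketch of the P_n -> P_{n+1} sequence. Everything in it (the `fitness` objective, the parameter-vector representation of a "program", the self-mutating step size) is an illustrative assumption, not anything from the actual theory; the point is only that each program proposes its own successor, including changes to its own search strategy.

```python
import random

def fitness(params):
    # Toy objective on a smooth, "low-entropy" landscape: a single peak at 0.
    return -sum(x * x for x in params)

def propose(program):
    """P_n proposes a candidate P_{n+1}: it mutates both its parameters
    and its own mutation step size (the recursive, self-modifying part)."""
    params, step = program
    new_params = [x + random.gauss(0, step) for x in params]
    new_step = max(1e-6, step * random.choice([0.5, 1.0, 2.0]))
    return (new_params, new_step)

def self_improve(program, generations=1000):
    for _ in range(generations):
        candidate = propose(program)
        if fitness(candidate[0]) > fitness(program[0]):
            program = candidate  # P_{n+1} replaces P_n
    return program

random.seed(0)
p0 = ([5.0, -3.0], 1.0)   # initial program: parameters plus step size
pn = self_improve(p0)
print(fitness(pn[0]) > fitness(p0[0]))  # True: an improvement was found
```

The same skeleton covers the single self-optimizing program (as here) or, with a richer representation inside `program`, a whole software ecology.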
> Finally, if the improvement curve is so horribly logarithmic, then why
> didn't the vast majority of BLIND evolution on this planet take place in
> the first million years? If increasing complexity or increasing
> improvement renders further improvements more difficult to find, then why
> doesn't BLIND evolution show a logarithmic curve? These mathematical
> theories bear no resemblance to *any* observable reality.
You see it very much in alife simulations. This is why so many people
try to find ways of promoting continual evolution in them; the holy
grail would be to get some kind of Cambrian explosion of
complexity.
(In RL I think that explosion was likely due to the appearance of
flexible homeobox body plans and to co-evolutionary arms races between
shelled predators and their prey: the first was a case of discovering
a regular and easily evolvable subspace, and the second made it
profitable to exploit.)
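Why blind evolution slows down on a structureless landscape can be sketched with a standard record-statistics argument (my toy example, not from any cited paper): if each trial draws an independent random fitness, the expected number of new fitness records after t trials grows only like ln(t), so improvements become exponentially rarer over time.

```python
import random

def blind_search(trials, seed=1):
    """Blind evolution on a maximum-entropy landscape: each trial is an
    independent random fitness draw. Record when a new best appears."""
    rng = random.Random(seed)
    best = float("-inf")
    record_times = []
    for t in range(1, trials + 1):
        f = rng.random()
        if f > best:
            best = f
            record_times.append(t)
    return record_times

records = blind_search(100000)
# Expected number of records after t iid trials is ~ln(t) (about 11.5
# for t = 100000), so the improvement curve is logarithmic.
print(len(records))
```

This is exactly the regime where alife runs stagnate; on a regular, exploitable landscape the curve need not look like this.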
And I think there actually was this kind of fast learning once
replicators appeared on Earth; it just didn't leave any traces we can
find. Over the next eons a lot of stuff happened, but rather
slowly. All the "quick" evolution we see in the modern multicellular
world (oh, I feel a case of "In my day, we didn't have fancy metazoan
body plans, we had to make do on our own in the primordial soup!"
coming up :-) is actually evolution of high-level body plans rather
than of the basic metabolic machinery that took eons to learn. And
even body plan development slowed down quickly after the Cambrian,
leaving room for further bursts as further high-level systems were
added.
> If BLIND evolution is historically observed to move at a linear or better
> rate, then self-improvement should proceed at an exponential or better
> rate. Differential equations don't get any simpler than that.
The question is how you measure evolutionary improvement. In alife you
can just measure fitness. In real life the best proxy is the rate of
extinction, which can be seen as a measure of the average fitness of
entire species. In
http://xxx.lanl.gov/abs/adap-org/9811003 it is mentioned that we see a
general decrease in extinction rate during the Phanerozoic; according
to them it seems to be a 1/t decline.
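As a quick aside on what a 1/t decline means operationally (a hypothetical illustration with made-up numbers, not data from that paper): if the extinction rate behaves as E(t) = a/t, then log E is linear in log t with slope -1, which is how such a decline would show up in a log-log plot.

```python
import math

a = 10.0
times = [10, 50, 100, 250, 500]   # made-up times since the start of the record
rates = [a / t for t in times]    # idealized 1/t extinction rates

# Least-squares slope of log(rate) versus log(time).
xs = [math.log(t) for t in times]
ys = [math.log(r) for r in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))  # -1.0 for a pure 1/t decline
```

Real extinction data is of course much noisier; the slope estimate is how one would test the 1/t claim against it.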
Then we get the problem of how much better deliberate design is than
blind evolution. This depends a lot on the fitness landscape and on
whether detectable regularities exist. Intelligent systems can not
just optimize something, but use information from the environment to
find an effective way of doing it. In the general case intelligence is
of course no use, but as you point out, this is fortunately a
low-entropy world where intelligence has some applications :-)
A given intelligent system will still work better in some kinds of
environments than in others - a good intelligence works well in likely
environments. The problem here is the amount of information needed to
understand an environment well enough to be smart in it; it has taken
humanity 500 years of the scientific method to get this far, and we
still have too little experience of AI programming to tell what kind
of environment that is.
So I guess we cannot at present clearly say how much better deliberate
design is at brain building. But it seems a question that could be
answered with some research, even today.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:56:25 MDT