Anders Sandberg wrote:
>
> "Eliezer S. Yudkowsky" <sentience@pobox.com> writes:
>
> > In the "general case", the landscape is not subject to optimization.
> > Intelligence is an evolutionary advantage because it enables the organism
> > to model, predict, and manipulate regularities in reality. In a
> > maximum-entropy Universe, which is what you're talking about, intelligence
> > is impossible and so is evolution. The fitness landscape you're talking
> > about, and that the paper you cited is talking about, bears no useful
> > resemblance to our own low-entropy reality.
>
> True. My point is that if you want to build something that functions
> in the real low-entropy world, then you have a good chance. But if it
> is only going on inside the high-entropy world of algorithms then you
> will likely not get any good results. This is why I consider
> "transcendence in a box" scenarios so misleading. Having stuff
> transcend in the real world is another matter - but here we also get
> slower interactions as a limiting factor.
Okay, I don't understand this at all. I don't understand why you think
that there's higher entropy inside the box than outside the box. The box
is a part of our Universe, isn't it? And one that's built by highly
unentropic programmers and sealed away from thermodynamics by a layer of
abstraction.
> Hmm, my description may not have been clear enough then. What I was
> looking at was a sequence where program P_n searches for a replacement
> program P_{n+1}.
Yep, and it's possible to say all kinds of reasonable things about P_x
searching for P_y that suddenly become absurd if you imagine a specific
uploaded human pouring on the neurons or a seed AI transferring itself
into a rod logic. Does it even matter where the curve tops out, or
whether it tops out at all, when there are all these enormous improvements
dangling *just* out of reach? The improvements we *already know* how to
make are more than enough to qualify for a Singularity.
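For concreteness, here is a minimal sketch of the P_n -> P_{n+1} search
loop we're both gesturing at -- the program representation, the score()
measure, and the propose_variant() operator are placeholder names I'm
inventing for illustration, not anything from Anders's formulation:

    import random

    def score(program):
        # Placeholder fitness measure: how well the current program does
        # at whatever it is being optimized for.  Whether this flattens
        # out depends entirely on what it is measured against -- which is
        # the whole disagreement.
        return sum(program)

    def propose_variant(program):
        # Placeholder search operator: P_n's method of generating a
        # candidate replacement.  A smarter P_n gets a smarter operator
        # here, which is exactly the point in dispute.
        candidate = list(program)
        i = random.randrange(len(candidate))
        candidate[i] += random.choice((-1, 1))
        return candidate

    def self_improve(p0, generations=1000):
        # The P_n -> P_{n+1} sequence: each program searches for a
        # replacement and hands over only if the replacement scores
        # strictly higher.
        current = p0
        for _ in range(generations):
            candidate = propose_variant(current)
            if score(candidate) > score(current):
                current = candidate
        return current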
> > Finally, if the improvement curve is so horribly logarithmic, then why
> > didn't the vast majority of BLIND evolution on this planet take place in
> > the first million years? If increasing complexity or increasing
> > improvement renders further improvements more difficult to find, then why
> > doesn't BLIND evolution show a logarithmic curve? These mathematical
> > theories bear no resemblance to *any* observable reality.
>
> You see it very much in alife simulations. This is why so many people
> try to find ways of promoting continual evolution in them; the holy
> grail would be to get some kind of Cambrian explosion of
> complexity.
Yes, and you see it in Eurisko as well. Where you don't see it is in
real-life evolution, in the accumulation of knowledge as a function of
existing knowledge, in human intelligence as a function of time, in the
progress of technology (*not* some specific bright idea, but the
succession of bright ideas over time), and in all the other places where
sufficient seed complexity exists for open-ended improvement.
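To pin down the alife point: a toy hill-climber on an uncorrelated random
landscape -- my own throwaway construction, not any particular alife
system -- stalls almost immediately, because nothing it learns about one
genotype tells it anything about the neighboring ones:

    import random

    def random_fitness(genotype, _cache={}):
        # Uncorrelated ("maximum-entropy") landscape: every genotype gets
        # an independent random fitness, so improvements found at one
        # point say nothing about where to look next.
        key = tuple(genotype)
        if key not in _cache:
            _cache[key] = random.random()
        return _cache[key]

    def climb(length=20, steps=10000):
        genotype = [random.randint(0, 1) for _ in range(length)]
        best = random_fitness(genotype)
        history = [best]
        for _ in range(steps):
            candidate = list(genotype)
            i = random.randrange(length)
            candidate[i] ^= 1          # flip one bit
            f = random_fitness(candidate)
            if f > best:
                genotype, best = candidate, f
            history.append(best)
        # Improvements become vanishingly rare almost at once; the
        # best-so-far curve looks just like the stalled alife runs
        # under discussion.
        return history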
> The question is how you measure evolutionary improvement. In alife you
> can just measure fitness. In real life the best thing is to look at
> the rate of extinction, which could be seen as a measure of the
> average fitness of entire species. In
> http://xxx.lanl.gov/abs/adap-org/9811003 it is mentioned that we see a
> general decrease in extinction rate in the Phanerozoic; it seems to be
> a 1/t decline according to them.
I looked over this (cool) paper, but it seems a bit suspect when
considered as a measure of evolutionary improvement rates, given that I've
yet to hear any argument for functional complexity accumulating at
inverse-t (*across* successions of punctuated equilibria, not within a
single equilibrium). It certainly doesn't show up in any graph of
progress-over-time that I'm familiar with; those graphs usually show the
picture in which half the total progress happened merely within the last
century, or the last million years, or whatever the relevant timescale is.
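The arithmetic behind that picture is just a property of exponentials: if
progress doubles every T years, the most recent T-year slice holds about
as much progress as everything before it put together. A trivial check,
with a doubling time picked purely for illustration:

    def fraction_in_last_period(total_years, doubling_time):
        # Treat cumulative progress as proportional to
        # 2 ** (t / doubling_time), the dominant term of an exponential.
        total = 2.0 ** (total_years / doubling_time)
        before = 2.0 ** ((total_years - doubling_time) / doubling_time)
        return (total - before) / total

    # 10,000 years of history with an illustrative 100-year doubling
    # time: the last century accounts for half of all cumulative progress.
    print(fraction_in_last_period(10000, 100))   # -> 0.5

A 1/t rate, by contrast, concentrates most of the change early in the
record -- the opposite of the shape in those graphs.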
I'm sorry, but this still looks to me like the "Each incremental
improvement in human intelligence required a doubling of brain size"
argument.
-- -- -- -- --
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence