Robin Hanson wrote:
>
> Billy Brown wrote:
> >I was reading some of the previous debates on the Singularity in the list
> >archives recently, when it struck me that there is a major factor that does
> >not seem to have been seriously considered.
> >Simply put, the more advanced a technology becomes, the more work it takes
> >to improve it. As technology advances there is a general tendency for
> >everything to become more complex, which means more work for the engineers.
> >... Because of these factors, a Singularity is likely to have a slow takeoff.
>
> At the world level, average IQ scores have increased
> dramatically over the last century (the Flynn effect), as the
> world has learned better ways to think and to teach.
> Nevertheless, IQs have improved steadily, instead of
> accelerating. Similarly, for decades computer and
> communication aids have made engineers much "smarter,"
> without accelerating Moore's law. While engineers got
> smarter, their design tasks got harder.

Look back at the archives for more detailed arguments.

And, to summarize briefly my reply from the Singularity debate:
All your examples, and moreover all your assumptions, deal with (1) roughly
constant intelligence and (2) a total lack of positive feedback. That is, the
model traces the curve of a constant optimizing ability, which quite
naturally peters out. The improvements are not to the optimizing ability, but
to something else. The Flynn effect improves brains, not evolution. Moore's
law improves hardware, and even VLSI design aids, but not brains. There's no
positive feedback into anything, much less intelligence. With intelligence
enhancement, each increment of intelligence results in a new prioritized
list of possible improvements, since improving intelligence changes which
improvements can be contemplated or perceived.
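
Here is a toy sketch of the distinction, in Python; the constants and the
linear "rising difficulty" schedule are arbitrary assumptions of mine, so
take it as an illustration of the shape of the argument, not a model of
real engineering:

# Both runs face the same rising difficulty; only the source of
# optimizing power differs.
STEPS = 50

def run(feedback):
    capability = 1.0
    for t in range(STEPS):
        difficulty = 1.0 + 0.5 * t               # later steps cost more work
        power = capability if feedback else 1.0  # does capability feed back?
        capability += power / difficulty
    return capability

print("fixed optimizing power: %6.1f" % run(feedback=False))  # increments shrink
print("capability fed back in: %6.1f" % run(feedback=True))   # increments compound

The run with fixed power keeps slowing down; the run where capability feeds
back into optimizing power keeps speeding up, even though both face exactly
the same curve of harder and harder problems.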
--
sentience@pobox.com          Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you
everything I think I know.