Robin Hanson wrote:
>
> Mere mention of the word "feedback" is not sufficient to argue for a sudden
> and sustained acceleration in growth rates, which is what you seem to claim.
I didn't just "mention" it; I talked about the behavior of the sum of the series I'_1 = C(P, O, I), I'_2 = C(P, O, I + I'_1), I'_3 = C(P, O, I + I'_1 + I'_2), and so on. I don't see any realistic way to get steady progress out of this model. Flat, yes; jumps, yes; but not a constant derivative.
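To make that concrete, here's a minimal sketch in Python. The functional form of C is my assumption purely for illustration - each pass of the optimizer extracts a fixed fraction of whatever headroom the current design has left - but it shows why the increments shrink instead of holding a constant derivative:

    def run(I=1.0, headroom=2.0, frac=0.5, steps=8):
        """Iterate I'_n = C(P, O, I + I'_1 + ... + I'_{n-1}), with an
        assumed C: each pass claims a fixed fraction of the remaining
        headroom of the current design."""
        total = I
        for n in range(1, steps + 1):
            inc = frac * (headroom - total)  # assumed form of C
            total += inc
            print("I'_%d = %.4f, running total = %.4f" % (n, inc, total))

    run()

The increments halve on every pass and the total flattens out against the design's ceiling; under this assumption the same trick, reapplied, buys less each time.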
You can't apply the same optimization trick over and over again; that's like
the old joke about compressing Usenet down to one byte with lossless
compression. If optimization yields a small jump, then the next increment of
optimization is likely to be zero, since much the same method is being used.
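The compression version of this is easy to check directly; this snippet just demonstrates the folk fact that a lossless compressor gains nothing on its own output:

    import os
    import zlib

    data = os.urandom(512) * 64        # 32 KiB with lots of repetition
    once = zlib.compress(data, 9)
    twice = zlib.compress(once, 9)
    print(len(data), len(once), len(twice))

The first pass shrinks the input dramatically; the second pass typically makes it slightly *larger*, because the trick has already been used up.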
If optimization yields a big jump, one that translates into a substantial
amount of power freed up for intelligence, the AI is likely to redesign itself
in a fairly major way - from 1.1 to 2.0, or at least 1.0 to 1.1. Major
repartitioning of the computational modules, and whatnot, which in turn is
likely to lead to a large jump in intelligence and optimization.
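Extending the earlier sketch with an assumed redesign rule (again, toy numbers of my own choosing, not a claim about the real dynamics): whenever a pass frees up enough power, the design's ceiling gets multiplied, standing in for the 1.0-to-2.0 repartitioning:

    def trajectory(redesign, steps=12, I=1.0, headroom=2.0,
                   frac=0.5, threshold=0.3, gain=5.0):
        total, path = I, []
        for _ in range(steps):
            inc = frac * (headroom - total)
            total += inc
            if redesign and inc > threshold:
                headroom = gain * total   # major repartitioning: new ceiling
            path.append(round(total, 2))
        return path

    print("same trick only:", trajectory(redesign=False))
    print("with redesigns: ", trajectory(redesign=True))

One run flattens against its ceiling; the other triples every step. Nothing in between looks like a constant slope.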
Now either these large steps keep repeating all the way up to superintelligence, or at some point they stop and the curve goes flat. Either way, you get jumps or plateaus, not a constant derivative.
Final remark: Given the relative computational requirements of consciousness and algorithmic thinking, given the Principle of Mediocrity, and given the relative linear speeds and processing power of computers compared to the human brain, I would find it a remarkable coincidence if a major jump were slowed down exactly enough to look like slow, steady improvement on the human timescale, rather than flat or vertical. It might happen anyway - the human programmers might be the pacing factor, being unable to work on things that happen at other timescales - but it wouldn't happen by coincidence.
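For what the coincidence amounts to numerically, here is a back-of-the-envelope Monte Carlo. Every number in it is my assumption for illustration: take the characteristic doubling time of the process as log-uniform between a nanosecond and ~300,000 years, and call "slow and steady on the human timescale" anything between a month and a decade:

    import math
    import random

    LOW, HIGH = math.log10(1e-9), math.log10(1e13)    # seconds, log10
    BAND = (math.log10(2.6e6), math.log10(3.2e8))     # ~1 month .. ~10 years

    trials = 100000
    hits = sum(BAND[0] <= random.uniform(LOW, HIGH) <= BAND[1]
               for _ in range(trials))
    print("fraction in the human-visible band: %.3f" % (hits / trials))

Even with this generously wide band it comes out under ten percent, and nothing about the underlying hardware picks that band out as special - which is the sense in which landing there would be a coincidence.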
-- 
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/AI_design.temp.html
http://pobox.com/~sentience/sing_analysis.html
Disclaimer: Unless otherwise specified, I'm not telling you everything I think I know.