Anders Sandberg wrote a number of cogent objections to the 'SI apotheosis' scenario. Rather than doing a point-by-point response, let me restate my case in more detailed terms.
First, I don't think the super-AI scenario is inevitable, or even the most likely one. It becomes a serious possibility only given the following conditions:
Discard any one of these, and you no longer have a problem. However, if all of these assumptions are true, you have a very unstable situation.
Our SI-to-be isn't some random expert system or imitate-a-human program. It's Eliezer's seed AI, or something like it written by others. It's an AI designed to improve its own code in an open-ended manner, with enough flexibility to do the job as well as a good human programmer.
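The shape of that loop can be caricatured in a few lines. This is strictly my own toy illustration, not anything from Eliezer's actual design: the "program" is reduced to a parameter vector, the "rewrite your own code" step to blind mutation, and "doing the job well" to a fixed benchmark - self-improvement with every interesting part stubbed out:

```python
import random

random.seed(0)  # deterministic run for this sketch

def benchmark(program):
    """Score a candidate 'program' (here just a parameter vector).
    Stands in for 'does the job as well as a good human programmer'."""
    return -sum((x - 3.0) ** 2 for x in program)

def propose_variant(program):
    """The system's current skill at rewriting itself: blind mutation.
    The scenario's real claim is that this step itself gets smarter."""
    return [x + random.gauss(0, 0.1) for x in program]

program = [0.0, 0.0, 0.0]
score = benchmark(program)
for generation in range(10_000):
    candidate = propose_variant(program)
    candidate_score = benchmark(candidate)
    if candidate_score > score:  # keep only strict improvements
        program, score = candidate, candidate_score
```

The toy converges only because its benchmark and mutation rule are fixed; the scenario's whole force is that a seed AI would also improve propose_variant itself, which a loop like this does not capture.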
Now, the first increment of performance improvement is obvious - it writes code just like a human, but it runs faster (not smarter, just faster). It also has some advantages due to its nature - it doesn't need to look up syntax, never makes a typo or misremembers a variable name, etc. Together these factors produce a discontinuity in innovation speed. Before the program goes online you have humans coding along at speed X. Afterwards you have the AI coding at speed 10^6 X (or maybe 10^3 X, or 10^10 X - it depends on how fast the computers are).
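Whatever the right multiplier, the arithmetic of the resulting time compression is easy to check. Taking the three guesses above as illustrative factors (these numbers are the post's hypotheticals, not measurements):

```python
# One human-year of coding, compressed by each hypothetical speedup factor.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

for speedup in (10**3, 10**6, 10**10):
    wall_clock = SECONDS_PER_YEAR / speedup
    print(f"speedup {speedup:>14,}x: 1 human-year -> {wall_clock:,.4f} s")
```

Even the conservative factor of a thousand turns a year of human effort into under nine hours of wall-clock time; at a million it is about half a minute.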
At that point the AI can compress years of programming into days, hours, or even minutes - a short enough time scale that humans can't really keep track of what it is doing. If you shut it down at that point, you're safe - but your average researcher isn't going to shut it down. Eliezer certainly won't, and neither will anyone else with the same goal.
At this point the AI will figure out how to actually make effective use of the computational resources at its disposal - using that hardware to think smarter, instead of just faster. Exploiting a difference of several orders of magnitude should allow the AI to rapidly move to a realm well beyond human experience.
Now we have something with a huge effective IQ, optimized for writing code and thinking about thought. Any human skill is trivial to a mind like that - the skill might not be obvious to it at first, but it won't take long to invent given the necessary data. From here on we're all talking about the same scenario, so I won't repeat it again.
So, is it the assumptions you don't buy, or the reasoning based on them?
Billy Brown, MCSE+I