Ah, don't I wish I had a bloody microwave!
Eliezer, I must admit that I have not yet read your essay on this topic, so please forgive me if I am raising points you already raise.
My apologies, I haven't been completely clear. My problem with this notion of AI is that it is inherently circular: ultimately, the only way we could know that the AI is phenomenally more intelligent than any of us is for a being of phenomenally high intelligence to tell us so.

Let's say that we determine the intelligence of an AI by the number of right "answers" it gives us (answers being defined here as correct solutions to problems and/or questions in all fields, from science to philosophy - a haphazard definition, so feel free to correct me, and I'll reassess). Somewhere down the line, the AI is going to give an answer that does not concur with what the human populace believes to be the right answer. This is inevitable, since it is all but certain that we as a species are wrong in some of our beliefs. Besides, if the AI agreed with everything the human populace believed, it would be pretty useless to us as a Power.

Now, when the AI hits one of these points and comes up with an answer contrary to what we believe to be true, there is no way of knowing whether the AI is right or mistaken, for there is no outside third party (which would have to be more intelligent than either the AI or the humans) to mediate.

Therefore, sure, I'm willing to grant that a Power is possible. However, we cannot be certain that an AI /is/ a Power, in the sense that we cannot be certain that it is sufficiently more intelligent than us. Therefore, if the AI decided that the human species should be obliterated, I would be justified in calling it a bad judgment call and taking up arms against it.
I feel as if I'm still not being terribly clear, and once again, I apologize. I would be happy to answer any questions or challenges.