> Let's say that we determine the intelligence of an AI by the number of
> right "answers" it gives us (answers being defined here as correct
> solutions to problems and/or questions in all fields, science to
> philosophy - a haphazard definition, so feel free to correct me, and
> I'll reassess). Somewhere down the line, the AI is going to give an
> answer that does not concur with what is believed by the human populace
> to be the right answer.
For this you have to consider the method by which the AI in question "thinks". Humans are not objective intelligences that will always give the answer the evidence points to - I hope I don't have to elaborate here. You use the word "belief". If the AI were what the layman usually conceptualises as an AI (an artificial human), then it would give you all the correct answers to mathematical problems and so forth, but any philosophical output would be tainted by a desire for certain things to be true. If the AI is instead a purely rational master problem solver, then humanity will surely disagree with much of its philosophical output too. It would have no motivation to glorify humanity or itself, and so would keep giving answers like "insufficient data" or "logical fallacy in input" or something.
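To make that "purely rational" behaviour concrete, here is a toy sketch (my own illustration, not anything anyone has actually built) of an answerer that asserts only what its inputs entail, and otherwise reports "insufficient data" or flags contradictory input:

def answer(facts, query):
    """Answer a yes/no query strictly from the given set of facts."""
    def negate(p):
        # Treat "~p" as the negation of "p".
        return p[1:] if p.startswith("~") else "~" + p
    # Contradictory input: some fact and its negation are both present.
    if any(negate(p) in facts for p in facts):
        return "logical fallacy in input"
    if query in facts:
        return "yes"
    if negate(query) in facts:
        return "no"
    # Nothing in the input settles the question either way.
    return "insufficient data"

print(answer({"socrates_is_mortal"}, "socrates_is_mortal"))  # yes
print(answer({"p", "~p"}, "q"))                              # logical fallacy in input
print(answer(set(), "humanity_is_glorious"))                 # insufficient data

Note that it never volunteers anything flattering about humanity or itself; it just reports what follows from its input, which is the point above.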
> This is inevitable, since it is
> all but certain that we as a species are wrong in some of our beliefs.
Yes, here's the old "belief" chestnut again. If you explicitly built the AI to distort its worldview with "beliefs" by some mechanism, then using it as an objective problem solver would produce results no better than a human's. The strength of AI would be the ability to scrutinize hypotheses with impeccable logic.
> In
> addition, if the AI agreed with everything the human populace agreed with,
> it would be pretty useless to us as a Power.
Indeed.
> Now, when the AI hit one of these points, and comes up with an answer
> contrary to what we believe to be true, there is no way of knowing whether
> the AI is right or mistaken, for there is no outside third party (which
> would have to be more intelligent than either the AI or the humans) to
> mediate.
You don't need a mediator. The AI would certainly have the ability to provide a full explanation of its "thought" processes. Otherwise, as you say, the output is useless.
> Therefore, sure, I'm willing to grant that a Power is possible.
> However, we cannot be certain that an AI /is/ a Power in the sense that we
> cannot be certain that it is sufficiently more intelligent than us.
This depends on how you define intelligence. If you define it as the ability to perform calculations in a given time, then any AI would vastly outperform any human. If you mean it as the human fitness function that it is, then what use is a suave, charming and knowledgeable computer anyway? I suppose you could attach a big plastic knob to it or something... whatever yanks your chain.
> Therefore, if the AI decided that the human species should be
> obliterated, I would be justified in calling it a bad judgement call and
> taking arms against it.
Anyone who constructed an AI that was capable of considering human extermination and provided it with the means to achieve it would have to be a nutter anyway. There's no need for the AI bit - just write a hacking program to launch nukes or something...
All this talk of AI, and it's obvious that most people here have no academic experience of AI. It's a very different bag o' nuts once you study it, let me tell you. There's no clear-cut "intelligence", nobody has the faintest idea how to instantiate consciousness, and really there's nothing very special about an "intelligent" program that makes it conceptually different from a conventional program. To clear the old mind on such matters, start by thinking about constructing a solid definition for "intelligence", then think about how you might program a computer to possess this quality, and what use it would be. You will be disappointed, I'm afraid.
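As a small illustration of that last point - my own toy example, chosen purely for illustration - here is a classic textbook "AI" technique, state-space search, written out as the perfectly ordinary program it is: a queue, a loop and a goal test, applied to the old water-jug puzzle.

from collections import deque

def search(start, goal, neighbours):
    """Breadth-first search; returns a list of states from start to goal, or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # "insufficient data", so to speak

def jug_moves(state):
    """Legal moves for the classic 4-litre / 3-litre water-jug puzzle."""
    a, b = state
    moves = {(4, b), (a, 3), (0, b), (a, 0)}               # fill or empty either jug
    pour = min(a, 3 - b); moves.add((a - pour, b + pour))  # pour 4L jug into 3L jug
    pour = min(b, 4 - a); moves.add((a + pour, b - pour))  # pour 3L jug into 4L jug
    moves.discard(state)                                   # drop the do-nothing move
    return moves

print(search((0, 0), (2, 0), jug_moves))  # a sequence of jug states ending with 2 litres

Nothing in there is conceptually different from any other program you might write, which is rather the point: the "intelligence" is a label we attach afterwards.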