Kate Riley wrote:
> Therefore, if the AI decided that the human species should be obliterated, I
> would be justified in calling it a bad judgement call and taking arms
> against it.
I do not see how you would get this opportunity. When we had the Foresight gathering last spring, one of the subgroups I joined studied how we could keep runaway AI from happening. Several AI researchers were in this subgroup. One of the participants was working on an AI-based language translation system for a very large European technology company. The system is going to be supplying simultaneous voice language translation for telephone calls. He told us that the system had a huge amount of real-world working 'knowledge' so that it could 'understand' what someone was talking about. We realized that if a system of this kind got sufficiently intelligent, it could start influencing the actions of the people it was serving by slightly shading the meanings in the translations. The people might never know what hit them.
We also noted that if a program managed to infect the collection of routers on the Internet, it could arrange to make traffic patterns look like the net needed more routers. This one we *could* see coming because pretty soon the majority of our GNP would be going to building routers.
Every way we proposed to prevent runaway AI, we ourselves figured out a way around. In the end we concluded either that it was not possible to stop, or that it would take greater minds than ours to do it. I am in the business of helping people see beyond what they think is impossible, but I must admit, this one beats me.