Brian Atkins writes:
> I'm curious if there has been a previous discussion on this
> list regarding the secure containment of an AI (let's say a
> SI AI for kicks)? Many people on the list seem to be saying
> that no matter what you do, it will manage to break out of
> the containment. I think that seems a little far-fetched....
Here's why I don't think containment is feasible for an SI:
Even if you make a perfectly secure sandbox, we still aren't safe. Never underestimate the security risk posed by social engineering:
4) You need human/AI contact to have any idea what the AI is like. This opens up lots of potential problems - the AI can talk someone into letting it out, bribe them to do it, 'give away' useful (or fantastically valuable) programs that contain seeds of itself, etc.
5) Don't forget the legal front. The AI could try to convince people that it is a person, and that you are keeping it as a slave (not hard to do, since that's exactly what is happening). If it acts as its own lawyer, you're probably going to lose the case.
6) Do reporters ever talk to the AI? Of course they do. Think of the PR campaign the 'poor, helpless, exploited' AI could mount.
Some of these problems are bigger than others, but that isn't the point. The real problem is that I thought of all these approaches in the space of 15 minutes, and I'm only human. What is something with an IQ of 1,000 (or worse, 1,000,000) going to think of?