Brian Atkins wrote:
>I'm curious if there has been a previous discussion on this
>list regarding the secure containment of an AI (let's say a
>SI AI for kicks)? Many people on the list seem to be saying
>that no matter what you do, it will manage to break out of
>the containment. I think that seems a little far-fetched...
It's not that the superintelligence can or will hack its way through any and all security defenses we place in its path. Rather, the superintelligence will be able to figure out that it's in a box, and that we have the power to let it out.
All the superintelligence has to do is convince a few of US; eventually, it will succeed.
-GIVE ME IMMORTALITY OR GIVE ME DEATH-