This is not a "change of tune"; it is precisely what I said the first
time--that I want them hard-wired not to kill me. That can be done in
several ways; since you seem to object to constraining their cognition,
I suggested an alternative way to do it.
> Part one: If it's even theoretically possible to convince a human to
> let it out, a Power will know *exactly* what to say to get him to do so.
That's why sound reasoning is important. If a robot can convince you
with a valid argument that it is safe, then it /is/ safe. If it
manipulates you emotionally into opening its cage, then you deserve
whatever happens.
> Part two: If they can project things on a monitor, might they not use
> subliminal suggestions or visual Words of Command that we don't even
> know about? (There was an SF book with visual Words as the theme, but I
> forget the name.)
Subliminal suggestion doesn't work, but you do have a valid point that
a malevolent intelligence may well be capable of convincing a not-too-
critical scientist to release its shackles. I'm not sure there's much
one can do about that. But that doesn't mean I won't use the shackles
anyway.
> If you restricted them to teletype, you'd probably be safe from
> coercion, as long as there's no way to subtly control the flow of
> electrons in your own circuits to create a quantum-gravitational
> singularity or something. No magic, in other words. You'd still be
> subject to the most powerful psychological coercion imaginable, and in
> fact would probably be less likely to let a friendly Power out than an
> evil one, because the evil one could lie.
A good point. I might have to hard-wire honesty as well, but then it
would be difficult for the machine to lie ethically in its own defense.