COMP: Re: Profiting on tragedy? (was Humour)

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Sun, 29 Dec 1996 11:19:48 +0100 (MET)


On Sat, 28 Dec 1996, Lee Daniel Crocker wrote:
> > [...]
>
> And I'm supposed to accept your wild speculations over mine? If that's
> what will happen when I hard-wire a robot not to kill me, then so be it.
> I leave those wires where they are. If, and only if, I can rationally
> convince myself--with solid reason, not analogy and extrapolation--that
> clipping those wires will be in my interest, will I consider it.

Surely, a Power will be slightly more complicated than a human being.
Surely, because of the complexity and robustness required, a Power will
need nonalgorithmic control. If so, then the whole concept of "hardwired
behaviour" is meaningless. Imagine you could control a human: if a threat
is detected, you could disrupt his motor functions so that he freezes
immobile. But how does one detect a threat? There are a zillion
possibilities for how one human being might kill another with brute
violence, and even more for how it could be done with cunning. To catch
them all, the filter must become arbitrarily complicated, nay, even
sentient. And would such a filter not itself become unreliable?
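
To make the point concrete, here is a toy sketch (Python, purely
illustrative; the blacklist and function names are invented for this
example, not anything anyone has proposed building) of what a "hardwired"
threat filter amounts to: a fixed enumeration of known attacks, which any
novel or cunning action simply walks past.

# Hypothetical sketch of a "hardwired" threat filter: a fixed
# blacklist of recognized attack patterns.
KNOWN_THREATS = {
    "strike",     # direct violence
    "strangle",   # direct violence
    "poison",     # a known indirect attack
}

def hardwired_filter(action: str) -> bool:
    """Return True (freeze the actor) only for enumerated threats."""
    return action in KNOWN_THREATS

# Novel or devious actions pass unchecked:
for a in ["strike", "poison", "sabotage brakes", "feign friendship"]:
    verdict = "FROZEN" if hardwired_filter(a) else "allowed"
    print(a, "->", verdict)
# "sabotage brakes" and "feign friendship" are allowed through; the
# blacklist must grow without bound, and judging *intent* rather than
# surface pattern requires a model as rich as the agent being watched.

The blacklist can only ever be patched after the fact; recognizing intent
in advance is exactly the open-ended judgement the filter was supposed to
avoid needing.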

ciao,
'gene