Bernard Hughes wrote:
> Following up on the silicon brain story posted earlier, I came across another
> interesting article in the EEtimes
>
> http://www.eetimes.com/story/OEG19981026S0015
>
> This one is about a Raytheon project to build an artificial soldier topped off with
> the Cog head from MIT. The neural net system proposed is of a type that is new to
> me, in that it assumes that the information passed among neurons comes as a result
> of the temporal synchrony among signals. Anybody else come across that theory?
>
> Building an autonomous, heavily armed soldier strikes me as a risky way to try out
> AI theory. A fixed place ABM system is one thing, but a general purpose killer
> robot seems like asking for trouble. If programming "do not harm humans" has some
> challenges, "kill all humans except for the friendly ones" seems fraught with
> peril. Reminds me of an old SF tale, I think called "I made you", in which a dying
> engineer is pinned down by the robot sentinel he built.
>
> I'd vote for keeping these guys immobile unless you are *really* sure of your IFF
> programming.
>
The motivation for robot soldiers does not come from ground commanders, but from bean counters and politicians. Any ground commander knows that a) you don't own any piece of land until you stick an 18-year-old kid with a rifle on top of it, and b) the worst threat to a free democracy is a professional army at the beck and call of the leadership, with no personal stake in maintaining civil liberties. Such 'Robocop' concepts are flawed from basic principles, and should be stomped out wherever some idiot gets a hair up his butt to build one, unless the AI is granted citizenship and recognized as 'human equivalent' from the start (not bloody likely). Anything less is a prescription for tyranny.
Mike Lorrey