Re: Profiting on tragedy? (was Humour)

Eugene Leitl (Eugene.Leitl@lrz.uni-muenchen.de)
Sat, 28 Dec 1996 16:39:06 +0100 (MET)


On Fri, 27 Dec 1996, Lee Daniel Crocker wrote:

> [...]
>
> If one buys Rand's contention that normative philosophy (ethics,
> politics) can be rationally derived from objective reality, then we
> can assume that very intelligent robots will reason their way into
> benevolence toward humans. I, for one, am not convinced of Rand's
> claim in this regard, so I would wish to have explicit moral codes
> built into any intelligent technology that could not be overridden

This would assume AI can be achieved with procedural systems, which I
don't think is possible. Asimov's Laws of Robotics must remain a fiction,
alas. The real world is much too fuzzy to be safely contained by simple rules.
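To make the brittleness concrete, here is a minimal sketch of my own (nothing
from the thread; the action names and the closed "forbidden" list are invented
for illustration). A procedural First-Law-style check can only forbid what its
author thought to enumerate, and stays silent on unforeseen, indirectly harmful
actions:

    # Minimal sketch: a hard-coded "First Law" check as a procedure.
    # Everything here is hypothetical, for illustration only.

    # A closed list of harms the rule's author happened to foresee.
    FORBIDDEN_ACTIONS = {"strike_human", "withhold_medicine"}

    def first_law_permits(action: str) -> bool:
        """Naive rule: allow anything not explicitly forbidden."""
        return action not in FORBIDDEN_ACTIONS

    if __name__ == "__main__":
        # The rule blocks only what was enumerated in advance...
        print(first_law_permits("strike_human"))         # False: blocked
        # ...and happily permits an unlisted, indirectly harmful action.
        print(first_law_permits("disable_smoke_alarm"))  # True: allowed

The rule itself is trivial to state; the open-ended part is deciding what
counts as "harm" in a fuzzy world, which no fixed enumeration can cover.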

> except by their human creators. If such intelligences could reason
> their way toward better moral codes, they would still have to
> convince us humans, with human reason, to build them.
>

ciao,
'gene