From: nanowave (nanowave@shaw.ca)
Date: Tue Aug 12 2003 - 12:05:16 MDT
>So we already have examples of individuals getting *really*
>annoyed and doing something about it. It is probably not
>going to be a good thing if individuals get *really* annoyed
>and have an AI at their disposal to do something about it.
>
>Robert
Yes, the possibility of unwittingly unleashing a kind of "berserker" upon
the earth would seem to be not statistically insignificant; nonetheless, I
remain confident (if only via gut feeling) that this kind of
Saberhagen-esque scenario will never come to pass. Is it completely
unreasonable to presume that 'human-level' intelligence implies human-level
ethics and values (two kinds of social intelligence) as well as pure
knowledge-crunching/combinatorial power?
A tendency toward malevolence and unnecessary destruction is still
considered entropic and SUB-human in these parts, is it not?
Robert? ;-)
Russell Evermore