From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Feb 27 2003 - 14:06:27 MST
Wei Dai wrote:
> On Thu, Feb 27, 2003 at 02:15:38PM +0100, Anders Sandberg wrote:
>
>>I think the important point about the story was that it showed how
>>"friendliness" can go wrong, not in the traditional "evil AI goes on a
>>rampage across the world" sense or the equally traditional "nice AI
>>creates a utopia but humans have no meaning without struggle" sense
>>(which it mimics on the surface). Here the issue is that the
>>friendliness is too rigid: yes, it will protect people and give them
>>what they want, but in an inflexible way.
>
> I think the problem is not rigidity. The problem is that the SI is not
> really an SI. It's not especially intelligent. It can be fooled by
> unaugmented humans, and it doesn't seem capable of augmenting human
> intelligence (the story doesn't explicitly say this, but it's implied
> because nobody in the story asks for augmented intelligence or encounters
> anyone with augmented intelligence). What kind of an SI would have
> unintentionally almost allowed somebody to die from rabies?
See some discussion of this story on the AGI list:
http://www.mail-archive.com/agi@v2.listbox.com/thrd3.html
-- 
Eliezer S. Yudkowsky                          http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence