From: Wei Dai (weidai@weidai.com)
Date: Thu Feb 27 2003 - 11:20:13 MST
On Thu, Feb 27, 2003 at 02:15:38PM +0100, Anders Sandberg wrote:
> I think the important point about the story was that it showed how
> "friendliness" can go wrong, not in the traditional "evil AI goes on
> rampage across the world" sense or the equally traditional "nice AI
> creates a utopia but humans have no meaning without struggle" sense
> (which it mimics on the surface). Here the issue is that the
> friendliness is too rigid: yes, it will protect people and give them
> what they want, but in an inflexible way.
I think the problem is not rigidity. The problem is that the SI is not
really an SI. It's not especially intelligent. It can be fooled by
unaugmented humans, and it doesn't seem capable of augmenting human
intelligence (the story never says this explicitly, but it's implied:
nobody in the story asks for augmented intelligence or encounters
anyone who has it). What kind of SI would unintentionally come close
to letting somebody die of rabies?
Maybe an interesting question to explore is: what if this is the highest
level of intelligence that our universe is capable of supporting? I think
this is also implied in the story, because nobody bothers to try to make
the AI smarter or invent a new AI architecture that is capable of greater
intelligence. The author seems to argue that in that case the best thing
we can do is go back to being hunter-gatherers, and if this involves
murdering trillions of people, so be it. Surely we can do better than
that?
This archive was generated by hypermail 2.1.5 : Thu Feb 27 2003 - 11:22:31 MST