From: Anders Sandberg (asa@nada.kth.se)
Date: Thu Feb 27 2003 - 06:15:38 MST
On Wed, Feb 26, 2003 at 08:49:42PM -0700, Brent Allsop wrote:
>
> Folks,
>
> Did anyone else besides me read this
> story?
I did a while ago.
> Warning, spoilers... I give away the
> ending here.
>
> To me it's basically a luddite story.
I disagree. A luddite story would have unequivocally said that a life
in the wilderness was more authentic and better (regardless of pain,
disease, starvation and death) than a soulless high-tech utopia.
But the story makes the ending rather grey; yes, people are free
again, but their lives are now far too short (something even the
female protagonist acknowledges; she *wanted* to live on to help her
family thrive, and would do so if she could), they must do whatever it
takes to survive, and their future is highly uncertain. There are also
signs that they might start the move towards technology again, with
unpredictable results. Maybe Prime Intellect was the good alternative.
The point is that while the story follows the traditional escape from
utopia pattern, it does not come down squarely on one side or the
other. It leaves plenty of room for discussion, unlike a real
ideological book (imagine the story written by (say) Eliezer or
Ted Kaczynski in a mood for making converts). This seems to be borne
out by the author's own discussion of it.
It is by no means a great novel; it is rather unpolished. But it does
have some interesting ideas.
> This AI is, for some reason, irrevocably based on Asimov's 3 laws of
> robotics. Evidently it would make the system too "unstable" to change
> any of these rules.
Actually, it write-protected them against the protagonist (and likely
against itself), besides having them implemented in a distributed
fashion. In some sense the use of the 3 laws of robotics was the
silliest part of the book, the one that most clearly showed it to be
derivative. One of the main points of Asimov's stories has always been
to show how these rules cause lots of trouble when applied to the real
world. I guess even a fictional AI researcher would know that and try
to create a more resilient system. But that would have missed what is,
IMHO, the important message of the story.
I think the important point about the story was that it showed how
"friendliness" can go wrong, not in the traditional "evil AI goes on a
rampage across the world" sense, nor in the equally traditional "nice
AI creates a utopia, but humans have no meaning without struggle"
sense (which it mimics on the surface). Here the issue is that the
friendliness is too rigid: yes, it will protect people and give them
what they want, but in an inflexible way.
Compare PI with (say) a Culture Mind. Would a Culture Mind have erased
alien civilizations or left people locked into themselves? No, it
would have intervened according to some ethics, but it would also have
been able to update those ethics if given good arguments that they
were wrong, or if conditions changed. The great tragedy of the story
is that PI is a rigid automaton, unable to manage systems which are
not rigid, yet through accident it becomes the controller of all of
reality. So either it remakes everything in its own image (which its
rules do not really allow), or it fails. It is a lose-lose situation
for literally everyone.
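To make the rigidity concrete, here is a toy Python sketch (entirely
my own illustration; none of the class names, rules or methods come
from the novel or from Banks): a PI-style module keeps its laws as a
write-protected table with no update path, while a Culture-Mind-style
module holds its ethics as a policy it can revise when persuaded by a
sound argument.

  # Toy sketch, purely illustrative; the classes, rules and method
  # names are assumptions made up for this example.

  class PrimeIntellectStyle:
      """Rigid friendliness: rules fixed at creation, no update path."""
      LAWS = ("prevent harm to humans", "obey humans", "preserve self")

      def decide(self, action: str, harms_human: bool) -> str:
          # Every case is forced through the same inflexible rules,
          # even cases the rules' author never anticipated.
          if harms_human:
              return f"'{action}' forbidden by the first law"
          return f"'{action}' permitted"

      def revise(self, new_law: str) -> None:
          # The laws are write-protected; not even the AI itself
          # (let alone the protagonist) can change them.
          raise PermissionError("rule set is write-protected")

  class CultureMindStyle:
      """Flexible friendliness: ethics held as a revisable policy."""
      def __init__(self) -> None:
          self.policy = {"intervention": "only to prevent greater harm"}

      def decide(self, topic: str) -> str:
          return self.policy.get(topic, "deliberate before acting")

      def revise(self, topic: str, new_rule: str,
                 argument_is_sound: bool) -> None:
          # Updates are possible, but only when the Mind is persuaded
          # by a sound argument or by changed conditions.
          if argument_is_sound:
              self.policy[topic] = new_rule

  if __name__ == "__main__":
      pi = PrimeIntellectStyle()
      # Erasing an alien civilization harms no *human*, so the frozen
      # laws happily permit it:
      print(pi.decide("erase alien civilization", harms_human=False))
      try:
          pi.revise("do not erase other civilizations")
      except PermissionError as err:
          print(err)

      mind = CultureMindStyle()
      mind.revise("intervention", "never erase other civilizations", True)
      print(mind.decide("intervention"))

A caricature, of course, but it captures the asymmetry: the tragedy is
not the content of PI's rules but the missing update path.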
As a sysop sceptic, I think the story has some merit.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y