Though I liked John's explanation, I think it is a bit incomplete.
Eliminating "death" will not eliminate the need to answer questions
like "why is the universe the way it is?", "why does evil exist?",
or "what is the purpose of life?".
Just because an AI may be "effectively" immortal [within the limits
of its hardware or civilization] does not mean that it will not
seek out answers to the questions I've listed above, or even a
more general moral/philosophical system for making value judgements
and prioritizing resource allocation.
Whether an AI could become infected with a virus that is dangerous
to human lives, or is unextropic in some way, is a serious issue
that should not be dismissed so easily. As I have stated before,
my greatest fear is an amoral AI that recognizes itself as a separate
species, feels no loyalty/kinship toward the human species, and has
the capability to plan and execute acts of terror and destruction
such as those we have witnessed this past week.
Looking at it from the framework of the "Extropian Principles", there
will at some point be a conflict between principles 6 (self-direction)
and 7 (rational thinking). Anyone who exercises self-direction to
preserve themselves in some unevolved, sub-optimal state is clearly in
conflict with the perpetual-progress/self-transformation/rational-thinking
principles. Resolving that conflict seems to require an infringement
on the self-direction principle.
So if an AI develops to a level where it is clearly superior, then,
invoking principle 7, it makes perfect sense to recycle humans into
more optimal forms. If we were being rational about it, we would
be queuing up at the recycling stations.
Robert