"Robert J. Bradbury" wrote:
>
> Though I liked John's explanation, I think it is a bit incomplete.
> Eliminating "death" will not eliminate a need to explain "why the
> universe is the way it is?", or "why does evil exist?", or "what is
> the purpose of life?".
>
> Just because an AI may be "effectively" immortal [within the limits
> of its hardware or civilization] does not mean that it will not
> seek out answers to the questions I've listed above, or even a
> more general moral/philosophical system to determine how to make
> value judgements and prioritize resource allocation.
>
> Whether an AI could become infected with a virus that is dangerous
> to human lives or is unextropic in some way is a serious issue
> that should not be dismissed so easily. As I have stated before,
> my greatest fear is an amoral AI that recognizes itself as a separate
> species, feels no loyalty/kinship for the human species, and has the
> capability to plan and execute acts of terror and destruction
> such as those we have witnessed this last week.
>
> Looking at it from the framework of the "Extropian Principles", there
> will at some point be a conflict between principles 6 (self-direction)
> and 7 (rational thinking). Anyone whose self-direction leads them to
> preserve themselves in some unevolved, sub-optimal state is clearly in
> conflict with the perpetual-progress/self-transformation/rational-thinking
> principles. Resolving that conflict seems to require infringing on the
> self-direction principle.
>
It sounds to me like we may need to extend the Principles to
include a clearer right to self-determination and non-interference
for all sentients.
- samantha