Re: Why would AI want to be friendly?

From: Samantha Atkins (samantha@objectent.com)
Date: Sun Sep 24 2000 - 17:32:38 MDT


"J. R. Molloy" wrote:
>
> Zero Powers asks,
>
> > What makes you think we'll be able to terminate a being which is orders of
> > magnitude more intelligent than we are? And even if we could, what makes
> > you think AI will be bribable? Why should it *care* whether it is
> > terminated? Particularly when its existence consists mostly of slave labor?
>
> First of all, AI does not equal SI. The question of the thread is "Why would AI
> want to be friendly?" not "Why would SI want to be friendly?"
>
> AI will grow in intelligence just as any entity (including a human) does. As
> machines (our Mind Children) become more and more intelligent, those that
> display unfriendly characteristics (unlike human children) can be terminated by
> deletion.

If the AI is in fact as much of an autonomous intelligent being as you
yourself are, then deletion of the AI is just as much an execution/murder
as the same act performed on you or me. If we insist on treating
beings as bright and adaptive as ourselves as mere things to be deleted
at will, we will precipitate a human-AI conflict very quickly.

- samantha
