Re: Eugene's nuclear threat

From: Samantha Atkins (samantha@objectent.com)
Date: Mon Oct 02 2000 - 17:50:46 MDT


hal@finney.org wrote:
>
> Eugene writes:
> > What can be possibly more poisonous than the end of the world as we
> > know it(tm) and death of all or most human beings?
>
> You need to make several assumptions to get to this from the starting
> point of someone trying to develop a "wild" self-enhancing AI through
> evolutionary techniques:
>
> - that the process will work at all to get any kind of AI
> - that the AI won't hit a bottleneck somewhere on its self-improvement
> trajectory
> - that the super-intelligent AI will want to wipe out humans
> - that it will succeed
>
> and also, I should add,
>
> - that this will be a bad thing.
>
> Without belaboring this last point, I do think that there is a
> difference between seeing humanity wiped out by mindless gray goo,
> versus having the human race supplanted by an expanding, curious,
> exploratory new super-intelligent life form.

Excuse me, but we are an expanding, curious, exploratory and
super-intelligent life form. It would be much more "chauvinistic" to
claim that just because the newest intelligent life form thinks much
faster, and so on, all former life forms should simply be junked as
clearly obsolete. That attitude might make sense for a full Borg
mentality, but is that the type of mentality, the kind of ethics, the
goal state we want to produce? Because it is up to us to pick our
ethics, to pick the aim of this work going forward. For now, at least.

All the dreams of all the generations of humans who were part of our
getting here, ending in something that ends humanity forever rather than
transforming it - I do not consider that for one second to be a "good"
thing. It is as bad as it gets. If anyone thinks it is "good" to see
humanity destroyed rather than transformed, then please explain to me
what your standard of the "good" is.

> If we step back a bit
> from our humanist chauvinism (and forgetting that we ourselves might be
> targets of the malicious AI), we generally do justify the destruction of
> less intellectually advanced life forms in favor of more advanced ones.

But is that a correct choice, or the very evidence of chauvinism,
short-sightedness, and a lack of system-level thinking?

> People do this every day when they eat.

Not in the same league at all. Some of us are, or have been, vegans
simply because eating other higher-level life forms seems barbaric. And
there is nothing stopping us, a few decades hence, from synthesizing all
foods (as long as we actually require them).

> From a sufficiently removed
> perspective, replacing the human race with an intelligence vastly more
> aware, more perceptive, more intelligent and more conscious may not be
> entirely evil. I don't say it should happen, but it is something to
> consider in evaluating the morality of this outcome.
>

You can only evaluate a morality within a framework that allows valuing.
What framework allows you to step completely outside of humanity and
value this as a non-evil possibility?

- samantha



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:50:14 MDT