Re: TERRORISM: Is genocide the logical solution?

From: Mark Walker (mdwalker@quickclic.net)
Date: Mon Sep 17 2001 - 12:24:03 MDT


Robert has taken a lot of grief for this post, including being charged with
the worst crime: engaging in "pro-entropy" activities (Anders). With some
reluctance, I feel compelled to defend the logic of what Robert is saying,
because I think he raises an important question, namely: what are we willing
to sacrifice now, in terms of lives, to hasten the effects of the
singularity? I take it that this is the more general form of the question
that Robert is asking when he writes:

> From a rational position, *if* the case can be made that the
> Afgani position & politics is likely to result in the diversion
> of resources and delay the development of the technologies we anticipate
> developing by more than 6 months, then a plan of genocide to
> bury the country in rubble seems justified.

Anders made one of the most considered responses. He says Robert's position
is based on an ethical mistake:

>The
> ethical is most serious: you assume that human lives can have
> negative value, and do an essentially utilitarian calculation not
> of happiness, but simply of utility towards a certain goal. Humans
> not promoting this goal are a waste of resources and a potential
> threat, so eliminate them.

First off, I don't see where Robert assumes that humans who do not promote
this goal are a waste of resources; rather, he seems to be asking us to
trade off some lives now for many more lives in the future. Furthermore,
Anders claims that Robert has made an ethical mistake, but this is a
controversial claim in ethics. Robert assumes a consequentialist position,
i.e., he assumes that the right act here is the one that has the best
consequences, namely, the one that saves the most lives in the end. Anders,
as far as I can tell, expresses the deontological view, which says that some
actions are morally obligatory (or forbidden) regardless of their
consequences:

> The core idea of transhumanism is human development, so that we
> can extend our potential immensely and become something new. This
> is based on the assumption that human life in whatever form it may
> be (including potential successor beings) is valuable and worth
> something in itself. It must not be destroyed, because that
> destroys what transhumanism strives to preserve and enhance. Even
> if some humans are not helpful in achieving transhumanity doesn't
> mean their existence is worthless, and if they are an active
> hindrance to your (or anybody elses) plans destroying their lives
> is always wrong as long as they do not initiate force against you.

A simple test for figuring out whether you are a deontologist or a
consequentialist is to ask the following: Is it always wrong to take an
innocent life? If you are a deontologist about this matter, then you must
say that it is always wrong to take an innocent human life no matter what
the consequences. A consequentialist, in contrast, will weigh the
consequences of the action. If by killing one innocent person you could
save two lives, would you kill that person? How about if killing that
innocent person saved 10,000 lives? A consequentialist will say, "Yes, it
is morally correct to kill the one innocent person." (Myself, I am a
consequentialist; it is the worst moral position except for all the others.
Hopefully, with the singularity we will not have to choose between the
deontological and the consequentialist positions.)
    With this distinction in hand, we can see that Anders may have been a
bit quick in playing the "fascist card":

>
> A transhumanism not based on this core does not deserve the
> humanism part of the name. And a transhumanism that bases itself
> on the utilitarist approach of using humans as means to an end
> rather than ends in themselves becomes the ideological twin of
> fascism, with the sole difference that the singularity takes the
> place of the national community in the ideological structure.
>
To make this charge of an "ideological twin" stick, Anders would need to
show at least (a) that transhumanism is necessarily deontological in
structure, and (b) that the consequence Robert intends to be weighed is
that of an ideal, namely, the singularity, rather than the lives that the
singularity will save. However, (a) is an open question in my mind, and, as
I've said, Robert's discussion seems predicated on the assumption that the
singularity will save a great number of lives, which tells against (b). The
charge thus misunderstands (or does not read carefully) the form of
Robert's argument: the singularity is the means to the end of saving lives;
it is not that sacrificing lives is the means to the goal of the
singularity. Thus Robert writes:

> We also know, from calculations that I and Eliezer (independently)
> have done, that the annual cost between where we are now and the
> full manifestation of what we expect is feasible is of the order
> of 50 million lives per year.

I guess there is a sense in which I agree with Robert, for in effect he is
saying that if we knew that by sacrificing a certain number of lives today
we could save many more tomorrow, then the sacrifice would be justified.
Notice that Robert did not say that he knows that the antecedent is true;
this was one of the questions he was raising. To repeat, he says:

> From a rational position, *if* the case can be made that the
> Afgani position & politics is likely to result in the diversion
> of resources and delay the development of the technologies we anticipate
> developing by more than 6 months, then a plan of genocide to
> bury the country

It seems that we are way too ignorant of the consequences of our actions to
know that the antecedent of this conditional is true; nor are we ever
likely to know it. We don't know whether such actions might result in a
global war, or whether in so doing we might kill the very person who would
have made a significant breakthrough toward real AI. Also, we must admit
that we do not know that the consequent will obtain either. We do not know
whether a singularity is possible, even with all our resources devoted to
it, or whether it will really save lives. (Can we really be sure that a
post-singularity superintelligence might not think it best for humanity not
to possess technology and not to be uploaded, for the same sorts of reasons
we do not think we need to upload our pet goldfish? Perhaps the
superintelligence will reason that humans have the best lives when they are
simple hunter-gatherers. Yikes!) Obviously, we hope and believe that it is
possible and that it will save lives, but I for one would not be willing to
sacrifice millions of lives given our tremendous ignorance about these
matters.
    Having said this, I think that Robert could have raised these sorts of
questions in a less inflammatory way, and I wish he had done so.
Mark.


