Re: Premature deaths [was: extropians-digest V7 #4]

From: Samantha Atkins
Date: Mon Jan 07 2002 - 04:59:12 MST

Robert J. Bradbury wrote:

> On Sun, 6 Jan 2002, Tom Andrys wrote:

> Now, from an extropic perspective, I would argue we should seek
> the "greatest good" -- the survival of the most individuals.
> (One could also argue that one wants to promote the survival of
> the "best" or most creative individuals -- but that gets into
> some *extremely* subjective subject areas and potentially even
> eugenics so I'm not going to go there.)
> I'll simply state for the purposes of this discussion that "more"
> individuals are better than "fewer" individuals because it increases
> complexity and that is extropian. (Those of you reading this may
> note some subtle frictions between extropianism and transhumanism
> when considering whether trans- or post-humanism at least along some
> paths is likely to generate the greatest complexity (extropicness),
> i.e. you could have two "post-human" situations, equally "complex",
> one of which involves a large number of individuals and one of which
> involves a single meta-mind.)

A question is whether having more individuals truly increases
complexity, truly increases variety, or mainly increases the
amplitude of all the various "signals" humanity puts out.

> So, from a rational standpoint (given the above premises) the goal
> should be to minimize premature death. We all have finite resources
> (time, money, etc.). So the question becomes how to maximize
> the minimization of premature death given our limited resources.


> If the answer is yes, then we cannot fault recent attempts
> at creating "good days to die" (other than to object to the fact
> that "we" (Westerners) were the ones dying). If the answer is no, then
> we have to make the case that living and surviving is of much greater
> value than (a) the "stories" of one's life (e.g. the Klingon perspective)
> or (b) what one has contributed during one's life (a somewhat human
> perspective considering contributions of Einstein, Feynman, Darwin,
> etc.) [I think this translates into the concept that after one
> ceases to make "significant" contributions, one's life is devalued.]
> { Side-bar: This may be a quite interesting concept as it involves
> the concept of the depreciation of human lives. I know some (the
> humanist gaggle) might object to this but from an extropian perspective
> it may have merit -- if you aren't "contributing" to the society
> what rights do you have to "occupy" a place in it. }
> (You know -- survival of the fittest and all that...)

So, when AI gets to the point that many of our intellectual
contributions are paltry and worthless in comparison, should we
quietly go into that night? What should happen when we can
"contribute" no longer without a massive upgrade? What happens
if we think, as some inevitably will, that we won't be
ourselves if we take the upgrade? Should these people simply
die and be deprived of all the goodies too?

> Returning to the issue -- how does one prevent premature deaths
> at the lowest cost? I am asserting that *IT DOES NOT MATTER*
> what caused those deaths. What matters is the relative costs
> of reducing such deaths. Taking an alternate position means
> one must argue the relative "acceptability" of deaths. E.g. *These*
> deaths are bad and should be prevented while *those* deaths, well,
> they are not so important and we will choose to ignore them.

So if you can only decrease one set of deaths among many, and
even that decrease may lead to other deaths, then what do you
do? We stand close to a very slippery slope, one that has led
straight to many ideologically motivated hells on earth in the past.

> So to maximize the extropic vector, extropians should be asking
> the question "How do I save the most lives at the lowest cost?".
> (Or alternatively, stepping deeply into the swamp, make the
> case that some lives are more valuable than others -- and
> suggest policies that promote saving those "specific" lives.)

Even at the express cost of other supposedly "less valuable" lives?

> I will go on record here (and probably be crucified for it)
> that I do believe that some lives are inherently more valuable
> than others. There *are* some lives that *will* save more lives
> than they destroy (e.g. Winston Churchill vs. Josef Stalin).
> But I also question whether it is possible to identify the lives
> with greatest value in advance with sufficient certainty to be
> "picking" them. So one is rolling dice when one attempts to
> follow this path.

I think we get into serious trouble very quickly unless we at
least act as if we believe in the inherent worth of all human
lives. For if we do not, then it is all too easy to justify
misery and even death for some in order to maximize the safety
and lives of other "important" ones. The inherent worth
argument is not perhaps altogether logically supported but it
does help avoid some very dangerous thoughts and policies as
long as it is held.

> But, *if* one can come up with reasonable statistical measures
> that suggest that some political paths or leaders will save more lives
> than they eliminate -- then I would argue that in such situations
> that militant actions, even terrorism, in support of such paths
> are potentially justified.

Here you have stepped fully over to the Dark Side! :-(
Once one acknowledges the possibility that what one believes
in most is best maximized by killing others, there are
relatively few checks on the amount of misery that might
befall humankind from the logical, oh so logical, working out
of this idea. Why, if only a few hundred thousand humans
survive all the way to, say, creating a Sysop (not to pick on
that scenario at all), then one could argue that that is
infinitely better than letting all those billions live who
were bent on interfering to the point where the extinction of
all humanity was too high a risk. Or could one?

> In response to my arguments that early on, American revolutionaries
> were "terrorists", some have claimed that they only assaulted
> (in relative terms) "military" targets. In response, particularly
> to those arguing the fine points of the Geneva Convention, I must
> propose -- if one feels one will never "triumph" without taking
> the "pain & suffering" to the general population opposing your
> position -- is not the use of such tactics justified?
> Barbra Streisand, in Yentl, sang a song "Papa, can you hear me?".
> It would appear to me that this question is one that must be answered
> definitively in the west -- we must answer both those people currently
> disenfranchised as well as those who might someday be (from the Fukuyama
> posthumanist perspective).
> It seems necessary that we remain aware of the need to maintain
> a dialog. Once people have concluded "they are not heard", it is but
> a small step to ignore any "civilized" protocols as to how "conversations"
> should be conducted.

Yes, I agree.

> With regard to the prevention of premature deaths it becomes a
> question of whether listening (and responding) is less costly
> (from an extropic perspective of preventing premature deaths)
> than proceeding along our previous paths that assumed we were
> relatively secure. Another way of looking at this is -- will
> each $ spent on "Homeland Security" save more or less lives
> than each $ given to the NIH (or the DoD, or the NSF, etc.).
> (The same discussion applies to other countries as well.)

Homeland Security does not impress me as being really about
"security". It strikes great fear in me because I believe its
principal use will be to control and oppress the people. The
changes of laws and procedures, already passed and pending,
when enforced as a "fight against terrorism" and thus rendered
almost sacrosanct, with relatively little oversight and
control, have near limitless potential to be used to tyrannize
the country and to protect established power elites. If we
want to maximize human life and human potential we must
maximize freedom. More and more elaborate control will not do
the trick. At this critical juncture we cannot afford to spend
a decade or two in a global witch hunt. Do everything we can
to find culprits and to protect ourselves, yes, but not at the
price of removing the freedom, the access to information, and
the free movement of goods, people, and ideas that the
innovation needed to get us to the Singularity so very much
depends on.

- samantha

This archive was generated by hypermail 2.1.5 : Fri Nov 01 2002 - 13:37:33 MST