From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Sun Jul 20 2003 - 11:58:30 MDT
On Sun, 20 Jul 2003, Hubert Mania (responding to Harvey) wrote:
> In some extreme cases I guess it is appropriate to attack someone
> personally, so she or he gets a chance to reevaluate his/her thoughts.
> Whenever it comes to the support of killing people for any cause whatsoever,
> except for self-defense, I think I even have the obligation to make a personal
> attack.
In this case I would have to side with Hubert -- there are extreme
cases where one has to whack someone in the side of the head to get
them to "reevaluate his/her thoughts" (myself included).
But it seems likely that it is better to wade into the discussion
gracefully rather than resort to ad hominem attacks from the get-go. [I believe
there are several examples of such graceful entries in recent exchanges
with respect to my rather blunt proposal.]
> When Robert Bradbury made his sick nuking proposal the other day, he did
> express a monstrous, ultimate example of the ugliness of utilitarian
> philosophy, moral depravity and mega-fascism: wiping out a huge group of
> humans for his beloved fetish, called technological advance.
Yep, I'll plead guilty -- on some days I'm a hard-core extropian.
That is, we either advance civilization as fast as we possibly can
(complexification) or we might as well write it all off now as
a pointless exercise (odds are, given our current understanding
of physics, we all end up dead anyway). So all you are arguing
for is the extension of an entirely futile existence for billions
of people who happen to think they have "free will" (realizing,
of course, that their experience of "pleasure", "pain", "free will",
etc. has been dictated by a random set of evolutionary accidents).
Evolution doesn't care whether humanity survives or becomes extinct.
If you are going to propose that we should "care" about humanity
then you need to explain what parts of it we should save and why.
If I walk up to you on the street, pour gasoline all over you,
and light a match and toss it onto your clothing, resulting in
your rather painful death -- are you going to seriously argue
that the small subset of "humanity" (that I represent) should
be preserved?
I will plead guilty to engaging in a "glass bead game" -- the problems
with which were, I think, dealt with very well in terms of looking at the
time value of current vs. future lives. I don't think the problem is
resolved, but I am comfortable acknowledging where more work is required.
With regard to functioning as a fill-in for "Hitler, Goering and Goebbels",
you will have a hard time making such labels stick.
The reasons for this are as follows:
a) I have no agenda of my own to push (the only agenda I am looking at is
the extropian agenda, which I view as a possible path toward maximizing
the longevity of intelligence in the universe);
b) I don't strongly care if I survive.
Given (b), I should be viewed much as a Palestinian terrorist views himself -- my
survival is of little importance if a state is established and flourishes. (In my
case the "state" would be one that allows the greatest complexity and the longest
survival of the entities creating such complexity in the universe.)
PLEASE NOTE: I am not suggesting that an extropian state should be established
or what the policies of extropians should be. I *am* suggesting that if
humans who are alive now seek the long-term survival of "humanity",
they may need to expand their thinking.
Anders grasped the problem -- it is one of "triage" -- one that is dealt
with by physicians from time to time (perhaps more often than they would
like). Like it or not -- there are people out there who are making daily
decisions with respect to who will live and who will die. He also hit it
on the head when he cited the "present value argument" -- we do not have
an effective way of assigning a value to a life now (given its influence
over the next 100 years) vs. the value of a life a century from now (given
its influence over 1000 years or even a million years). Given this problem,
it is hard for me to compare 10^8 current human lives to 10^14 human lives
(per second). I would suggest that human minds are simply unable to grasp
what a loss of something like 10 trillion human lives (per century) would mean,
because even "Hitler, Goering and Goebbels" didn't think that big.
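To make the present value problem concrete, here is a minimal sketch -- my own
illustration, not Anders' formulation, and not anything agreed on in this thread.
It assumes a simple exponential discount rate r applied to lives realized a
century from now, and treats the 10^8 and 10^14 figures above purely as
magnitudes. The only point is that whether the future lives outweigh the
current ones hinges on an essentially arbitrary choice of r -- which is exactly
the part of the problem that remains unresolved.

    # Minimal sketch (assumption: simple exponential discounting; nothing in
    # the thread fixes the discount rate r or endorses this model).

    def present_value(lives, years_from_now, r):
        """Discount `lives` realized `years_from_now` back to today at rate r."""
        return lives / ((1.0 + r) ** years_from_now)

    current_lives = 1e8   # ~10^8 lives at stake today (figure from the post)
    future_lives = 1e14   # ~10^14 lives a century out (figure from the post)

    for r in (0.0, 0.05, 0.10, 0.15, 0.20):
        pv = present_value(future_lives, 100, r)
        verdict = "outweighs" if pv > current_lives else "is outweighed by"
        print(f"r = {r:.2f}: PV of future lives = {pv:.3e} ({verdict} 10^8 current lives)")

At low discount rates the 10^14 future lives dominate; only at a fairly steep
rate (roughly 15% per year and up, in this toy model) do the 10^8 current lives
win out -- which is why the choice of discount rate carries the whole argument.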
One does the discussion a disservice when one simply faults it rather
than identifying:
a) when it is reasonable to terminate a human life;
b) when such termination might support extropianism (e.g. non-entropic
directions -- since (a) is probably a mix of entropic and extropic vectors);
c) productive human directions that rely on the complexity of tradeoffs
(particularly re: the bioethics debates that people at the WTA and Anders
have suggested). We can probably agree that such discussions
have been hijacked for political purposes at this time -- but that doesn't
negate their long-term guidance and usefulness.
Robert