Re: SPACE: Cassini Mission Consequences

Richard Plourde
Mon, 22 Sep 1997 00:26:53 -0400

At 10:17 AM 9/21/97 -0600, wrote:
> From: Damien Broderick
> Subject: Re: SPACE: Cassini Mission Consequences
> Date: Sun, 21 Sep 1997 22:11:33 +0000
>At 12:19 PM 9/20/97 -0800, Amara wrote:
>>My personal opinion is that the antagonists are using this as a
>>banner for their personal causes. I honestly don't know why it has
>>carried so far. My scientist friends are perplexed also. You may be
>>interested in a "back-of-the-envelope" calculation that Jeff Cuzzi
>>made to demonstrate that the Plutonium risk is pretty small.
>If I have understood this risk evaluation, we learn the following
>bottom line: there seems to be a one-in-a-million chance of a
>plutonium accident into the biosphere, which would be the direct
>cause of 100 to 500 deaths during the next half century.
>Despite the waffle about this being the same odds as a billion
>people dying in a dino-killer asteroid impact, we have no control
>over an asteroid and every control over Cassini.

Not necessarily true.

For example, in 1997 we have no control over a killer asteroid.
If, on the other hand, we had continued developing
out-of-earth-orbit spacecraft in 1970, then very possibly we
would have control over such a killer asteroid today.

A choice to curtail knowledge-development at any particular point
in time has the consequence that the knowledge will not be
available at a later point in time. We just don't notice, because we tend
not to consider our environments today a consequence of our
choices yesterday. And, when we do consider consequences, we
generally consider the consequences of activity as carrying more
'responsibility' than the consequences of passivity.

The difficulty with such biases in our unconsidered philosophies
becomes apparent when we recognize that the 'ideal' behaviors for
a human, based on a proposition of responsibility-free passivity,
come out as something startlingly similar to the behaviors of a
rock.
Now, the question comes up, "How will the launching of some rocket
improve our knowledge to the degree that we would, at some time
when that knowledge becomes necessary, have the knowledge to turn
aside a killer asteroid?"

The answer, very simply, is, "we don't know." We *do* know,
however, that we do not have the knowledge or the tools to stop a
killer asteroid today.

We don't know *what* knowledge will come out of any particular
scientific experiment. Knowledge generally grows when what we
measure differs from what we expected to measure. An experiment
that yields exactly the anticipated results works to build
confidence in a theory -- but an experiment that yields an
unexpected result can have, as a consequence, an explosion in
knowledge, where we have no way of predicting in what direction
our knowledge will grow.

Apparently killer asteroids have hit the earth in the past.
Asteroids (and comets, etc.) continue to exist in the solar
system, and to intrude on our space. We do not have sufficient
knowledge to deal with them. If one hits, then we're not talking
about 500 people; we're talking, very possibly, about the
survival of higher life-forms, including us, on earth.

Now, do we *really* want to take a chance with the survival of the
entirety of mankind? Granted, it's probably only a million-to-one
chance that the knowledge gained from this particular mission
would make the difference, but can we afford to take even that
kind of a chance with the entire future of humanity at stake?



Whenever indulging in a non-quantified rhetorical argument,
remember that the lack of quantification generally allows an
inversion of the rhetoric. When we speak nonsense, we invite
nonsense. Particular forms of nonsense include "zero risk"
arguments; zero-risk can only occur when you decline to evaluate
all possible risks. Zero-risk does not come from our actions, but
from the ways we present our arguments. When we develop our
linguistic skills to the point where we can command the tides to
change, and, in response to our words, the tides do change, then
we might accept such propositions as valuable. Otherwise, I think
it best to accept them as trivially faulty.

The rhetoric I offered calls for irrational evaluation as much as
the original rhetoric does. Rhetoric doesn't serve if we want to
make sane decisions. Those sane decisions don't necessarily always
come out right -- but we have better odds when we look at what we
can find, calculate probabilities, determine what *most* *likely*
will get us from where we are to where we want to be, and
recognize that fears and ignorance are not the stuff of effective
decision-making.
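
The probability bookkeeping suggested above can be sketched in a few
lines of Python, using only the figures quoted earlier in this thread
(one-in-a-million odds, 100 to 500 Cassini deaths, a billion asteroid
deaths). These inputs are illustrative numbers from the discussion,
not an authoritative risk model:

```python
# Expected-fatality arithmetic using the figures quoted in this thread.
# The inputs are the discussion's own illustrative numbers, not a
# vetted risk assessment.

def expected_deaths(probability, deaths_if_it_happens):
    """Expected value: probability of the event times its death toll."""
    return probability * deaths_if_it_happens

# Cassini accident: one-in-a-million chance, 100 to 500 deaths.
cassini_low = expected_deaths(1e-6, 100)
cassini_high = expected_deaths(1e-6, 500)

# Dino-killer asteroid at the "same odds", with a billion deaths.
asteroid = expected_deaths(1e-6, 1_000_000_000)

print(f"Cassini:  {cassini_low:.4f} to {cassini_high:.4f} expected deaths")
print(f"Asteroid: {asteroid:.0f} expected deaths")
```

On these numbers the asteroid term dominates by six orders of
magnitude, which is the quantitative point the rhetoric above obscures.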


Richard Plourde ..

"The word is not the thing, the map is not the territory"