Re: "Morality?" - Composite Reply

Max More (maxmore@primenet.com)
Mon, 13 Oct 1997 00:16:44 -0700


At 11:55 PM 10/12/97 +0000, Nicholas Bostrom wrote:
>Max More wrote:
>
>> If there really were
>> vampires, and their survival really did depend on drinking the blood of
>> humans (in such a way that it killed them and there were no alternatives
>> like blood banks) then it would be *right* for vampires to attack humans.
>> It would also be *right* for humans to defend themselves. Differences in
>> nature produce different behaviors that are good for the beings that do
>> them. Given that humans have essentially the same nature, such moral
>> divergence is unlikely.
>
>Just to make sure I understand you correctly, is the following a
>correct interpretation of the above passage?
>
>Even if all humans were identical, it could still be the case that
>one human's deepest value is irrelevant to another. For example, if
>each is perfectly egoistic, then Mr. X's deepest value might be the
>flourishing of Mr X; whereas Mr. Y only cares about Mr. Y etc. The
>obvious sense in which you could say that these egoists have a
>common morality is that they could produce a set of norms that would
>apply to them all, such as "Don't steal! (Because if you do, the
>police will get you.)". I presume the point with the vampire example
>is to give an example of how there could be creatures with
>sufficiently different goals or abilities to make human morality
>irrelevant to them.

Right, although your last statement does not quite capture my position.
The point is that the vampires would have no reason (in the
situation as I've set it up) to refrain from killing humans. It would be
pointless for humans to tell vampires that they were being immoral. What
was moral for vampires might be immoral for humans to do to each other. But
saying that human morality is irrelevant to them is too strong. Many moral
principles and virtues that apply to humans might apply to vampires.
Courage, for example. Benevolence and mutual aid might even be rational
between vampires.

>If this is right then what distinguishes moral knowledge from
>other knowledge? Is it just that moral knowledge typically concerns
>life strategies or codes for interacting with other humans? Would
>"Take out an insurance!" or "Con thy neighbor subtly!" count as a
>moral imperative for humans, supposing that it would be good advise
>for most people (i.e. that each would better obtain her own values if
>she follows it than if she doesn't)?

I don't see moral knowledge as differing fundamentally from other
knowledge. I see moral knowledge as being more difficult to come by and
more difficult to test than scientific knowledge. But that's also true of
economic, sociological, and historical knowledge. Reaching firm conclusions
in ethics involves complicated reasoning about human psychology (and how
much is inborn, how much acquired, and how much alterable), and the effects
of various types of actions. (Game theory may sometimes help with the
latter. I like David Gauthier's discussion in Morals By Agreement, but I
find it to be only part of the answer.)
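
To make the game-theoretic point a little more concrete, here is a
minimal sketch (in Python, using the standard textbook prisoner's
dilemma payoffs; the numbers and strategies are my own illustration,
not anything from Gauthier) of how repeated interaction can make
mutual aid rational even for pure egoists: a conditional cooperator
ends up with a higher total payoff than an unconditional defector.

    # Illustrative payoffs: (my move, their move) -> my payoff.
    # "C" = cooperate, "D" = defect. Standard textbook values.
    PAYOFFS = {
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        """Cooperate first, then copy the opponent's previous move."""
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        """Defect no matter what."""
        return "D"

    def play(strategy_a, strategy_b, rounds=20):
        """Total payoffs for two strategies over repeated play."""
        seen_by_a, seen_by_b = [], []   # each player's record of the other
        score_a = score_b = 0
        for _ in range(rounds):
            move_a = strategy_a(seen_by_a)
            move_b = strategy_b(seen_by_b)
            score_a += PAYOFFS[(move_a, move_b)]
            score_b += PAYOFFS[(move_b, move_a)]
            seen_by_a.append(move_b)
            seen_by_b.append(move_a)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (60, 60): mutual aid pays
    print(play(tit_for_tat, always_defect))    # (19, 24): one-time gain only
    print(play(always_defect, always_defect))  # (20, 20)

The point of the toy run is only this: against a conditional
cooperator, sustained defection buys a small one-time gain and then
forfeits the larger gains from ongoing cooperation, which is one way
of cashing out why benevolence and mutual aid might be rational even
between vampires.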

Approaches to ethics that treat it in isolation from other areas of
knowledge seem to me doomed to produce ethical systems that cannot
adequately answer the ancient and excellent question "Why be moral?" I find
the most promising approach to be that of virtue ethics with its integral
concern for moral psychology. A fairly good collection on this is Flanagan
and Rorty's Identity, Character, and Morality.

To specifically address your proffered moral imperatives: "Take out life
insurance" (if good advice for most people) could be taken as a moral
imperative. However, I think of ethics primarily in terms of the virtues of
character for a successful life and accompanying principles. "Take out life
insurance" looks like a specific, highly context-dependent application of
virtues and principles such as personal responsibility, rationality, and
foresight. "Con thy neighbor subtly" again seems highly
context-dependent. I see ethics as involving a flexible hierarchy of
virtues and principles. Some are pretty secure and very broadly applicable.
Purported imperatives like "con your neighbor subtly" seem to be much
further down the list of derivations. That is, there are many
circumstances that could make that a bad bit of advice.

As another example: I would argue that the virtue of self-ownership (which
includes personal responsibility, rationality, independent thinking, and
self-direction) is a more basic value than truthfulness. While I believe
truthfulness generally to be virtuous, it depends far more on circumstances
than does self-ownership. If I were living in Stalinist Russia, I might
find it moral to lie in many situations.

Again, all rational moral discussion depends on shared basic values.
Perhaps we can go further, but I'm not sure at this stage whether I can
rationally persuade someone fundamentally bent on self- and
other-destruction and who explicitly rejects survival, happiness, and
flourishing as goals. I'd like to think that I could, in principle, show
their attitudes to be based on flawed factual beliefs and poor reasoning. I
suspect this *is* possible -- and so rational ethics can come even
closer to universal -- but I will not claim this at present.

Max

Max More, Ph.D.
more@extropy.org
http://www.primenet.com/~maxmore
President, Extropy Institute: exi-info@extropy.org, http://www.extropy.org