Re: Moral Complexity, Moral Efficacy Was: Moo/Boo!

Peter C. McCluskey
Sun, 15 Feb 1998 18:56:56 -0800
>On Tue, 10 Feb 1998, Peter C. McCluskey wrote:
>> Increasing those other features you mentioned may tend to cause an increase
>> in complexity, but that doesn't make complexity valuable in and of itself,
>> nor a good indicator of the others.
>As I mentioned before, there would seem to be a tradeoff between
>decisiveness and openness that I would want to take into consideration
>in drawing the line as to just how complex I want my ethical deliberations to
>be. Nevertheless I have a strong prejudice in favor of complexity. We're
>probably talking a little past one another here. To me, "simplicity"
>taken as a value in a moral system codes usually as intolerance and lack
>of imagination.

Under most conditions when society is working well, lack of imagination
about moral codes is good because the rules tend to be close enough to
optimal that random changes are more likely to hurt than to help.

To the limited extent that simplicity and intolerance have any correlation,
it sure looks to me like it's a negative correlation. The genealogical
contortions needed for racists to keep their rules from being subverted
by interbreeding are hardly as simple as the rule that all humans have
equal rights. Until you mention some examples of connections between
simplicity and intolerance, I will be inclined to assume that your claim
is based on inadequate stereotypes of bigots as being simpler because they
are less educated.

> What I want from ethical life are strategies that will
>put me in a position to appreciate and thrive in as many situations as
>either chance or my own design can contrive to throw at me. What seems to
>me to be wanted from ethics is richness more than simplicity.

You are clearly using "ethics" to refer to a much broader set of rules
than I do.

>I would distinguish an individual moral code (an esthetic/prudential style
>of living and individual meaning-making), from a social or political civil
>code. I think I've mentioned before that although I pretty strongly
>advocate vegetarian practices as individually broadening, I don't think
>vegetarian sensitivities should be mandated at the level of law. This is
>partly for the reasons you mentioned above. Still, even on your own terms
>it seems to me that simplicity isn't a *self-evident* value here. Doesn't

Very few values are self-evident.
Some more hints about why simplicity is good:
Occam's Razor.
Given two pieces of software that can accomplish the same tasks, would
you prefer the more complex or the simpler?

>it sometimes conduce to the benefit of social stability for a moral code
>to institute wiggle-room and ongoing contestation of norms? Sometimes
>it's good to make moral consensus a difficult thing to achieve. Sometimes
>it's good to keep the dividing lines between the moral and immoral pretty
>muddy (as when a clear "us" and "them" underwrites genocidal rages for
>order and purity).

Good point. I need to think some more about how this relates to my other
desires about moral systems.

>Animal rights discourse seems intriguing to me, but pretty flawed. As a
>rule I simply try not to inflict pain knowingly and unnecessarily on beings
>capable of experiencing it. It does seem to me profoundly disrespectful
>to recognize that the pain experienced by the beings we instrumentalize as
>food (etc) is real but simply doesn't *matter*. I want to respect as wide
>a range of beings as I can.

Making the ability to experience pain an important basis for respecting
rights bothers me because the difficulty of figuring out what beings
dissimilar to us experience makes it easy for people who rely on this
rule to rationalize almost any treatment of those beings that happens
to be convenient. How does your moral system handle these:
- fish
- clams
- humans genetically engineered so they say they feel no emotional
reaction to things that would be painful to us
- uploaded humans
- AIs
These last three are probably the areas where different moral systems will
have the most important differences in results in the next century.

My idea of a moral rule that can handle these better is based on the
ability to agree to respect each other's rights. However, I'm still
dissatisfied with the difficulties in dealing with beings who can't
communicate well with us.

> One of the negative consequences of the line between human and
>nonhuman animals being so obvious to most people is that it creates a
>general category of beings whose pain doesn't matter to us ethically, a
>category that seems to attach pretty promiscuously to other beings whose
>pain we would prefer not to bother too much with. It is a matter of
>record that justifications for misogynist or racist practices often
>(almost *always*) make recourse to the theme of the so-called subhuman or
>bestial nature of the people it dismisses. Muddying these categorical
>waters and so depriving this rhetorical tactic of its sting would possibly
>be a salutary thing.

I don't know. How likely is it that people who are open to racist rhetoric
will also be open to adopting your ethics in a consistently principled way?
Where do racist vegetarians like Hitler fit into this?
My impression is that xenophobic intolerance rarely depends in any important
way on classifying others as subhuman. It appears to be largely an
evolutionary adaptation for creating tribal unity that is most powerful
when the objective differences between the tribes are smallest (i.e. when
the danger of tribe members defecting is largest). Hatred towards, say,
mosquitos, seems to be much more subdued than hatred between Catholics
and Protestants in Northern Ireland.

> If I were in a position to argue with a Power who was on the verge
>of using the population of Poughkeepsie as ubergoo feedstock for some
>transhuman construction project, I would say that respecting diversity has
>the consequence of opening up an unforeseeably richer range of pleasurable

That might work for some harmful things a Power would be tempted to do,
but would be counterproductive in other circumstances. What about the
Power who wants to have fun by watching how humans respond to random
disturbances it adds to human society? A virus here, some random bits
flipped there, could give it a richer understanding of nature than
mere passive observation. I'd rather aim for convincing it to follow
a rule which implied respecting our rights.
Also, a Power will probably know enough that the example you set by
your vegetarianism is very unlikely to convince it to change its moral code.

>experiences (I for one don't think of my vegetarianism as a limiting or
>ascetic lifestyle as many seem to), as well as providing a robust and

Vegetarianism wouldn't be much of a limitation on meals I eat alone,
but when eating out with others the costs of determining which foods
meet vegetarian standards and altering behavior to follow those standards
consistently seem enough that it would take a compelling argument to
persuade me to adopt vegetarianism.

>resilient cultural system better able to fend off dangers unforeseeable to
>even such a Power. I see my vegetarianism as a dress rehearsal for Power
>ethics to come. (I'm sorry to hereby inflict the list with more "My
>Little Pony" extropianism.)

Huh? You're sorry for being interesting?

> Anyway, I agree with you that the boundary
>does indeed seem to be one of personal choice. I wouldn't say it is
>"just" personal choice, since this suggests that stronger injunctions are
>available, when ultimately I suspect they are not. Peter, thanks for the

I expect stronger injunctions for rules that I have confidence in as
moral systems. For instance, I normally demand that they be evolutionarily
stable, and that people willingly accept them.

Peter McCluskey  |  | Has anyone used           | to comment on your web pages?