Re: ETHICS: value-sets and value-systems

Mitchell Porter (mitch@thehub.com.au)
Tue, 21 Oct 1997 23:06:59 +1000 (EST)


Felix Ungman wrote:

>Mitchell Porter:
>>but it does imply that there are no facts of the form
>>"Morality A is better than morality B", only facts of the
>>form "Morality A is better than morality B, according to
>>morality C".

>Not true, I think. If a number of individuals share the same,
>or at least a similar, value-system, that morality implies
>behavioural dynamics. Morality A may have game-theoretically
>stable points at "better" value-sets than Morality B. So even
>though the value-sets may be highly subjective, the
>value-systems are not.

>The most obvious way a morality A can be better than morality B
>is the amount of synergy it promotes.
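
(A toy illustration of Felix's game-theoretic point - the
strategies, payoff numbers, and Python below are invented for
this sketch, Axelrod-style, not taken from the thread: let
Morality A prescribe reciprocity and Morality B prescribe
unconditional exploitation, and compare the stable points the
two populations settle at.)

# Toy iterated prisoner's dilemma. Payoffs are the standard
# Axelrod values (a hypothetical choice): T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, other_history):
    """Morality A: cooperate first, then mirror the other's last move."""
    return 'C' if not other_history else other_history[-1]

def always_defect(own_history, other_history):
    """Morality B: exploit unconditionally."""
    return 'D'

def play(strat1, strat2, rounds=100):
    """Total payoffs for one iterated match between two strategies."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h1, h2), strat2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += p1; s2 += p2
    return s1, s2

# Stable point of A (everyone reciprocates): mutual cooperation.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
# Stable point of B (everyone exploits): mutual defection.
print(play(always_defect, always_defect))  # (100, 100)
# A lone exploiter among reciprocators earns 104 while the
# residents earn 300 with each other, so A's point is stable.
print(play(always_defect, tit_for_tat))    # (104, 99)

The ranking of the two stable points (3 per round against 1 per
round) is a fact about the dynamics; whether that fact carries
any *moral* weight is exactly what is disputed below.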

To say "A is better than B because it promotes greater synergy",
you have to say synergy is *good*, and I claim that that statement
makes no sense except in the context of a morality or "value-system".
It may be objectively true that, say, A will outcompete B, but
again (I claim) this has no intrinsic moral significance, any more
than the fact that, say, the statement of A's principles appears
earlier in the binary expansion of pi than the statement of B's
principles; because there is no such thing as "intrinsic moral
significance", only significance in the context of a particular
value-set or value-system. I am not at all sure that the vague
notions of "value-set" and "value-system" are the right way to
analyse the phenomenon of choice - I would prefer concepts that
were grounded in empirical neuroscience, and not just my
personal version of folk psychology - but I'll stand by the
general argument, that "absolute good" is probably as unreal as
"absolute simultaneity".

Nonetheless, the sort of objective ranking of value-systems
you describe is highly relevant to anyone who's interested
in choosing a value-system, or even in altering their
value-set, and who is a "rational valuer", in Max More's sense.
But to get the most out of it, one should probably first
figure out, as completely as possible, one's "value-set" -
the "primary values", the things that are ends in themselves.
Is survival, or freedom, or novelty an end in itself?
If happiness is an end in itself, does it have more than
one form? Do some forms matter (again, to YOU) more than others?
If you can't rank one form of happiness above another, perhaps
you should be indifferent when faced with a choice between them
(knowing this could save you time one day, since you won't need
to deliberate over such choices). But
what about a choice between having just one, and having both
at once? If you prefer to have both forms of happiness at once,
does that mean that "having as many forms of happiness
simultaneously as possible" is a more important value for
you than either particular form? And so on. :)
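
(A toy way to make that last preference structure explicit -
every name and number below is hypothetical, just one possible
"value-set" written as a utility function over bundles: the two
forms of happiness score the same alone, so neither is ranked
above the other, yet the pair scores more than the two parts
summed, which is what it would mean for "as many forms at once
as possible" to be a value in its own right.)

# Hypothetical utilities over bundles of "forms of happiness".
# Only the ordering of the numbers matters.
utility = {
    frozenset(): 0,
    frozenset({'serenity'}): 10,
    frozenset({'exhilaration'}): 10,  # same as serenity alone
    frozenset({'serenity', 'exhilaration'}): 25,  # both at once
}

def prefers(a, b):
    """True if bundle a is strictly preferred to bundle b."""
    return utility[frozenset(a)] > utility[frozenset(b)]

# Indifference between the single forms: neither beats the other.
print(prefers({'serenity'}, {'exhilaration'}))  # False
print(prefers({'exhilaration'}, {'serenity'}))  # False

# But the combination beats either alone - and beats their sum,
# so the combination itself is valued beyond the two forms.
both = {'serenity', 'exhilaration'}
print(prefers(both, {'serenity'}))          # True
print(utility[frozenset(both)] > 10 + 10)   # True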

I'm intrigued by Eric's idea that "the summum bonum is
the generalizably extrinsic". I don't think it's any more
defensible as a proposed Absolute Good than any other candidate,
so I suppose I find it of interest as a psychological hypothesis
("this is what we're really after") and as a hedonic heuristic
("this is the thing to seek"). In fact, maybe so much emphasis
is placed on nano and AI in extropian/transhuman circles
because they promise to make so much else possible, and are
therefore further examples of powerful extrinsic goods.

-mitch
http://www.thehub.com.au/~mitch