True, but not of much use to me in determining a course of action.
The problem is that evolution only looks backward: I am the result of
those traits that succeeded in the past, under different conditions.
For example, throughout most of human evolution, food was scarce, so
I have an instinct to eat whenever I can. I choose not to do so
(though I should choose not to a bit more often than I actually do)
because I can evaluate the evidence that obesity does not favor my
goals of health and long life.
Greg's "natural values" of life and society are more rooted in the
present reality, but that too is not the time frame I want to look
at: I want values to be teleological, to tell me how to make my
future different from my present. True, I must reason from those
teleological premises and my knowledge of reality /now/ to derive
rational actions, but the nature of reality now is still no guide
to making those moral choices.
Max's "rational values" aren't very satisfying either, because they
are still derived from underlying chosen values, but there is a sense
in which they are valuable: simplicity. Once someone goes through
the task of showing that some derived value--say, wealth--follows
from a core value of life, one can then use the derived value itself
in making decisions without having to re-derive it every time. Much
the way a mathematician would use a finished proof as a premise for
more complex proofs, or the way a programmer would use completed
subroutines to build more complex programs. It is not even necessary
that one know the derivation itself; in fact not spending the time
to go through every such derivation would be advantageous--an example
of what Friedman calls "rational ignorance", where the cost of a piece
of knowledge exceeds its value.
The game-theory arguments fail to get us out of our dilemma as well,
because they are founded on the assumption of measurable payoffs, which
are arbitrary. "Find positive-sum games" is of little use to someone
who has chosen values that prevent it. Or, for some, it leads to
results completely different from those we would prefer (for example,
if you choose to value the failure of others twice as much as they
devalue it, then every interaction in which you victimize someone is
positive-sum to you, and every cooperation is zero-sum).
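To make the arithmetic of that last example concrete, here is a toy
sketch (the payoff numbers and the `spite_weight` parameter are my own
illustration, not anything stated above):

```python
def subjective_sum(my_payoff, their_payoff, spite_weight=2.0):
    """Total an interaction as a spiteful agent would: the agent counts
    the other party's loss (their negated payoff) at spite_weight times
    its magnitude, on top of its own payoff."""
    return my_payoff + spite_weight * (-their_payoff)

# Victimization: I gain 1, the victim loses 3.
print(subjective_sum(1, -3))   # 7.0 -- reads as "positive-sum" to the spiteful agent

# Cooperation: both parties gain 2.
print(subjective_sum(2, 2))    # -2.0 -- mutual gain registers as a loss to that agent
```

With these made-up weights, advice like "find positive-sum games"
steers the spiteful agent toward exactly the interactions the rest of
us would call victimization, which is the point of the example.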
Where I depart from Mr. England is that while we agree that we cannot
rationally derive moral values, he then tries to derive moral values
from that failure--even more irrational. He says, in effect, that
since your morals cannot be rational, be less judgmental of others
and less committed to your own. I say choose one and get on with it.
I have no problem committing my life, my fortune, and my sacred honor
to the values I have chosen; it is precisely that commitment, I
believe, that enables me to accomplish the things I want. And
since I need the cooperation of others to accomplish them, I will
work to spread those values in others, and judge them without apology.
I just refuse to claim any objective basis for them, because honesty
is one of those core values I have chosen.
I also have to agree with Max that this is pretty esoteric stuff,
irrelevant to everyday use. Since 90% of the actions of humans are
demonstrably irrational based on their own expressed values, there is
more to be accomplished by solving /that/ problem than by worrying
about the derivation of those values. The latter can be dealt with
in places like this list; the rest of life can deal with the former.
--
Lee Daniel Crocker <email@example.com> <http://www.piclab.com/lcrocker.html>
"All inventions or works of authorship original to me, herein and past,
are placed irrevocably in the public domain, and may be used or modified
for any purpose, without permission, attribution, or notification."--LDC