> I think we're going in circles here. For an ethical theory to be true it
> has to be compared either to the "ends" or a "universal truth" - -
>
> In order for there to be a universal right or wrong, we'd have to ask if
> ethics can be derived from the laws of "nature" (18th Century Philosophy) or
> "god". Then we could become natural talmudists and begin implying things.
This is true. For this reason, I've been maintaining a consequentialist
rationality.
> On the other hand, if our ethical system is relative to our perception of the
> ends, then all ethical systems would be relative. Obviously the ethical
> system of an egoist vs. altruist would be quite different.
Not necessarily, and it's because of the generalization principle. If it
is rational for me to be an egoist, then it is also rational for you to be
an egoist. However, if we were both egoists, we would both be worse off;
this is bad, according to egoism and a consequentialist value system. So
what we find is that egoism fails to meet the requirements of
generalization according to egoism's own value system; for this reason,
egoism is fundamentally irrational.
It's like the Prisoner's Dilemma: egoism demands that we both incriminate
each other; utilitarianism demands that we both keep silent.
Utilitarianism provides better results, so it is rational, according to
consequentialism.
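The Prisoner's Dilemma argument above can be made concrete with a payoff table. This is a minimal sketch; the specific sentence lengths are illustrative, not from the discussion:

```python
# Prisoner's Dilemma: each entry maps (my_move, your_move) to
# (my_years, your_years) in prison. Payoffs are hypothetical.
PAYOFFS = {
    ("defect", "defect"): (5, 5),    # both incriminate: generalized egoism
    ("defect", "silent"): (0, 10),   # I incriminate, you stay silent
    ("silent", "defect"): (10, 0),
    ("silent", "silent"): (1, 1),    # both keep silent: the utilitarian outcome
}

def total_years(move_a, move_b):
    """Combined prison time, the consequentialist measure here."""
    a, b = PAYOFFS[(move_a, move_b)]
    return a + b

# Defecting dominates for each player taken individually...
assert PAYOFFS[("defect", "silent")][0] < PAYOFFS[("silent", "silent")][0]
assert PAYOFFS[("defect", "defect")][0] < PAYOFFS[("silent", "defect")][0]
# ...yet when egoism is generalized (both defect), both are worse off
# than under mutual silence, which is the failure described above.
assert total_years("defect", "defect") > total_years("silent", "silent")
print(total_years("defect", "defect"), total_years("silent", "silent"))
```

The assertions capture the structure of the argument: egoism recommends defection to each player, but the generalized result is worse by egoism's own measure.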
> One - - rationality as implied by an ethical system (This presumes that
> what is right is rational and what is wrong is irrational.)
This would be circular, as we are attempting to determine what is ethical
by deriving it from rationality.
> Two - - rationality as defined by the means vs. ends (This presumes that
> what works and leads to the "sum" is rational.)
If we define "well-being" as the "sum," then I think this is the
rationality which I tend to espouse.
> Three - - axiomatic rationality as defined by a system of universal truths
> (This presumes that the "answer" has already been given, and human logic has
> to conform to it, or be led astray.)
Depends on what you mean by axioms. For example, some axioms really *are*
undeniable and self-consistent; A=A leaps to mind. Indeed, I think
the generalization principle may be a similar axiom.
Is consequentialism undeniable? It may be. Consider another value
system, based on act types or motivations. What act types are good? What
act types are bad? What's a good/bad motivation? I can't even imagine a
value system which does not evaluate these in terms of their effects: that
"having a good motive" means that you intend to improve well-being, or
that a bad act type is one which reduces well-being. These are all
consequentialist value systems, however. What other sorts of value
systems could we even begin to consider?
>
> So in conclusion - - you have to qualify the term ethics or rationality by
> stating what's the goal of your system of ethics or rationality. The other
> person may then respond by asking why. If he does, that means there's an
> assumption that your system is part of a larger system (be it ethical, the
> laws of science, the universe, human nature, etc. etc.) and then you have to
> decide whether you wish to accept this presumption, and if so, justify your
> own system relative to that.
>
OK. I'm using a rationality which states that an action is rational to
the extent that it is the one most likely to result in improved
well-being.
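That criterion can be stated as a decision rule: among the available actions, pick the one with the highest expected well-being. A toy sketch, in which the actions, probabilities, and well-being values are all hypothetical:

```python
# Each action maps to a list of (probability, well-being) outcomes.
# All numbers here are made up for illustration.
actions = {
    "keep_promise":  [(0.9, 10), (0.1, -5)],
    "break_promise": [(0.5, 15), (0.5, -20)],
}

def expected_well_being(outcomes):
    """Probability-weighted sum of well-being across outcomes."""
    return sum(p * w for p, w in outcomes)

# The rational action, on this view, maximizes expected well-being.
best = max(actions, key=lambda a: expected_well_being(actions[a]))
print(best)
```

Under these made-up numbers, keeping the promise scores 8.5 against -2.5 for breaking it, so it is the rational action in the stated sense.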
> In conclusion, I really don't like the word ethics, because it is a loaded
> word with little meaning. I prefer to qualify it with "ethical continuity" -
> - thieves and liars too have their own system of ethics... hence implying
> that, if they abide by it, they are "ethical".
This is *not* true if we make the presumption I did at the beginning of
this discussion: that the right ethical theory is the rational one.
Within this context, I don't think that an ethical theory which condones
professional thieves is right; if it is rational for one man to be a
thief, then it is rational for anyone and everyone to be a thief. Since
it is not rational for everyone to become a thief (because everyone would
be worse off, as measured by themselves), this ethical theory is wrong,
and not rational.