> In the above paragraph you are judging one answer,
> "the right action is that which maximises extropy
> over all other possible actions" by another,
> Utilitarianism. Clearly this is not logical.
I should explain: this answer is not wrong simply because it disagrees
with utilitarianism. If that were the case, I would have no defense:
utilitarianism could be declared wrong by the same argument. (Though, if
utilitarianism is right, as I think it may be, then this argument would be
sound.)
However, I assert that most extropians would not agree with the
extreme ethical rule I described in my last post because it would require
actions which most extropians *already* consider wrong. Since we know
that most extropians do not believe killing, stealing and suicide to be a
good idea, we may use this to show that most extropians do not support a
strong consequentialist version of extropianism. I did not attempt to
show that they were right or wrong here; I was simply observing that they
do not support that particular moral rule.
> But you're judging Egoism by Utilitarianism, which
> does not seem to be a rational way of finding the
> correct answer. You cannot say one answer is wrong
> because it is not equal to another answer when you
> do not know the other answer to be correct.
You left off the second half of that paragraph, which was my *actual*
response to egoism. I believe I said something like this: the fact that
egoists will hurt others to benefit themselves is not logically
inconsistent, but it is not consistently generalizable: if everyone were
egoistic, we would actually be worse off. On some level, in order to be
egoistic, we must reject egoism. THIS is my real critique, which was more
or less the underpinning of Kant's argument: the correct answer to the
problem of ethics ought to work just as well (if not better!) when
everyone is acting ethically as when very few people are.
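To make the generalization test concrete, here is a toy sketch (Python,
with payoff numbers I have made up in the spirit of the prisoner's
dilemma) of how a rule that profits each individual can leave everyone
worse off once everyone adopts it:

    # Toy payoff table, prisoner's-dilemma style.  The numbers are my
    # own assumptions, chosen only to illustrate the shape of the
    # generalization argument; nothing hangs on the exact values.
    PAYOFF = {
        ("cooperate", "cooperate"): 3,   # everyone restrains egoism
        ("cooperate", "defect"):    0,   # the sucker's payoff
        ("defect",    "cooperate"): 5,   # the lone egoist profits
        ("defect",    "defect"):    1,   # universal egoism
    }

    # Individually, egoism ("defect") dominates: 5 > 3 and 1 > 0 ...
    assert PAYOFF[("defect", "cooperate")] > PAYOFF[("cooperate", "cooperate")]
    assert PAYOFF[("defect", "defect")] > PAYOFF[("cooperate", "defect")]

    # ... but generalized it fails: a world of egoists fares worse than
    # a world of cooperators (1 < 3).
    assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]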
> I'm not sure this is obviously rational, I fail to
> see why saving lives is a rational act.
Saving lives is not, in itself, rational, unless you get great pleasure
from doing so. Saving your OWN life, however, is. This is fine for the
simple egoist; the simple egoist would steal to save his own life without
a second thought. The objectivist/libertarian, however, will do anything
to save her own life *except* steal, which leaves her in a bad position
in sufficiently urgent situations.
I argue that objectivism is wrong because it requires one to perform
irrational actions in order to do the right thing, which is contrary to
our earlier presumption that the right action is the rational one. With
this alone, we could not reject simple egoism; if we add generalization to
this rule, however, egoism fails the test, leaving us, IMO, with
utilitarianism.
> But doesn't Utilitarianism promote self-sacrifice?
> If your death saves lives (or just makes others
> happy), then your death is a good thing.
Possibly. However, I believe that people greatly exaggerate the number of
situations in which one might reasonably give one's life to save others;
there is usually a better option which doesn't involve suicide. At the
same time, I will not say that there are never situations under which it
would be good (at least according to utilitarianism) to give one's life to
save others. The only justification I can give for this is, again, the
generalization principle: if no one would ever give up any of their
happiness or risk their lives to save others, we would all be more likely
to die or suffer unhappiness as a result; in order to pursue our own
happiness, we must reject this policy.
Meanwhile, both of the other two answers require us to give our lives
under sufficiently extreme circumstances. Even simple egoism may require
us to commit suicide if we will experience more pain than pleasure in the
remainder of our lives (though this is arguable). Utilitarianism provides
an answer which, IMO, agrees with both the generalization principle and
the conditions of rationality; that it requires suicide under extreme
conditions is not a sufficient counter: suicide may indeed BE rational
under very extreme circumstances.
You may choose to reject the generalization principle; if so, simple
egoism seems the most rational to me. However, if we DO accept the
generalization principle, I think that we must accept utilitarianism.