Dan Fabulich <firstname.lastname@example.org> wrote:
> Nick Bostrom wrote:
> >If one doesn't time discount future benefits, then it would seem that
> >the expected utility of *any* human action is infinite, since there
> >is always a finite probability that it will lead to eternal bliss.
> Yes, but this is cancelled out by the finite probability that the action
> will lead to eternal damnation and suffering.
The problem is that infinite utilities cannot cancel out, since subtraction is not defined for infinite cardinals.
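A quick illustration of why the cancellation fails (a sketch, using IEEE floating-point infinities as a crude stand-in for infinite utilities):

```python
import math

# Stand-ins for an infinitely good and an infinitely bad prospect.
eternal_bliss = math.inf
eternal_damnation = -math.inf

# The hoped-for "cancellation" is undefined: inf + (-inf) is NaN, not 0.
net = eternal_bliss + eternal_damnation
print(math.isnan(net))  # True
```

The same point holds in cardinal arithmetic: there is no well-defined difference between two infinite quantities that would let the good and bad prospects net out to anything.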
> >But here I see a problem for utilitarians... If other people's
> >happiness count for as much as your own, and if there are infinitely
> >many people (which is the case if the universe is open and has the
> >simplest topology), then the total amounts of pleasure and of pain
> >are both infinite, no matter what you do! Thus it seems that in an
> >open universe with the simplest topology, utilitarianism is no guide
> >to action. Similarly, for those people who think that an exact
> >replica of yourself is yourself - then *you* would in fact at this
> >very moment and in the future be experiencing an infinite amount of
> >pleasure and pain no matter what you do.
> >If we find the consequence that it doesn't matter what you do absurd,
> >then we seem to have a refutation of both utilitarianism and this
> >view of personal identity. But can that really be correct???
> This is only a problem if the universe turns out to be open and an
> infinite number of intelligent beings are already out there. If this is
> not the case, then there's no problem to address.
No; one problem still arises because relative to our current knowledge there is a finite probability that the universe contains an infinite number of beings. The expected utility is thus infinite (or undefined) even if the universe in the end turns out to be finite. Any action you take will therefore (on this utilitarian view) have infinite utility, so you can't use the maximize-utility criterion to choose an action.
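To make the point concrete, here is a minimal sketch (the probabilities and utilities are invented for illustration): once any outcome with nonzero probability carries infinite utility, the expected-utility sum is infinite for every action, so the criterion cannot rank them.

```python
import math

def expected_utility(prospects):
    """Sum of probability * utility over the possible outcomes."""
    return sum(p * u for p, u in prospects)

# Hypothetical prospects for two mundane actions: even a tiny
# probability that the universe contains infinitely many beings
# (utility +inf) swamps every finite term.
action_a = [(0.999, 10.0), (0.001, math.inf)]
action_b = [(0.999, -10.0), (0.001, math.inf)]

print(expected_utility(action_a))  # inf
print(expected_utility(action_b))  # inf -- the criterion can't choose
```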
I suppose you could argue, however, that if you add the extra rule that you should maximize the likelihood of obtaining a positive infinite utility (even when the total expected utility is undefined), then you ought to look for possibilities to which you assign a finite probability and in which you would have a finite chance of turning a finite-utility world into an infinite-utility world.
If there are no such possibilities, then (we could add a rule to the effect that) you should disregard all the possibilities where the utility is infinite/undefined and just act to maximize the expected utility over the remaining possibilities in the standard way.
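The two extra rules suggested above can be sketched as a lexical decision procedure (a hypothetical toy implementation; the action names and numbers are made up):

```python
import math

def choose(actions):
    """Lexical rule: (1) if any action offers a chance of positive
    infinite utility, maximize that chance; (2) otherwise drop the
    infinite/undefined prospects and maximize ordinary expected
    utility over what remains.

    actions: dict mapping name -> list of (probability, utility) pairs.
    """
    def p_pos_inf(prospects):
        # Total probability of a positively infinite outcome.
        return sum(p for p, u in prospects if u == math.inf)

    if any(p_pos_inf(ps) > 0 for ps in actions.values()):
        return max(actions, key=lambda a: p_pos_inf(actions[a]))

    def finite_eu(prospects):
        # Expected utility over the finite-utility outcomes only.
        return sum(p * u for p, u in prospects if math.isfinite(u))

    return max(actions, key=lambda a: finite_eu(actions[a]))

# Rule 1 applies: 'research' has the larger chance of an infinite payoff.
print(choose({"research": [(0.9, 1.0), (0.1, math.inf)],
              "leisure":  [(0.99, 5.0), (0.01, math.inf)]}))  # research
```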
> Even if this were the case, however, the infinities could still be
> compared. Remember, though, that comparing total utility is hard even
> when you've only got a finite number of people. But let's suppose that you
> had an infallible method for doing that. At that point, you could start
> making claims about one infinite set being a proper subset of the other
> infinite set, etc. And remembering that each action in such a universe
> would have literally an infinite consequence, it could even make sense to
> say that one action compared to another may result in infinities with
> different cardinalities, in which case the answer is clear: choose the
> bigger infinity of the two.
Well, perhaps, if that were the case. But it seems that you could make similar claims for both the positive and the negative utilities in uncountable ensembles, and then you would be back to comparing infinities of the same cardinality.
Nick Bostrom
http://www.hedweb.com/nickb
email@example.com
Department of Philosophy, Logic and Scientific Method
London School of Economics