Robin Hanson wrote:
> Flipping the argument around, anyone who doesn't choose their actions
> solely on how it influences the chance for eternal bliss *does*
> discount future benefits. Thus the vast majority of people do
> discount the future.
Yes, that was what I suggested too. The problem I was concerned with, however, was not one for decision theory as such, but for a certain ethical theory, utilitarianism (and possibly for one theory of personal identity as well). It seems wrong to say that in an infinite universe there would be nothing morally wrong with torturing innocent people for a little fun (especially since, according to recent data, our universe probably really is infinite). Yet on the standard version of utilitarianism, an action is good if and only if it maximizes total expected utility (where every equally sentient being counts equally). In an infinite universe the total utility is infinite no matter what we do, so every action, torture included, trivially "maximizes" utility, and this version of utilitarianism therefore gives an unacceptable result. The problem is that once you give everybody's well-being the same weight, discount factors won't remove the infinity.
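To see why discounting won't help here, consider a minimal formalization (the symbols P, u_i, and epsilon are mine, added only for illustration). Write the total utility of an action A as

  U(A) = \sum_{i \in P} u_i(A)

where P is the set of all sentient beings, each weighted equally, and |P| is infinite. If infinitely many beings have a well-being of at least some epsilon > 0, then U(A) = \infty for every A, so the condition "A maximizes total utility" holds vacuously of every action, torture included. A temporal discount factor \delta^t can make a sum over future times converge, but equal weighting of persons forbids any analogous discount across the members of P, so the sum over the infinite population diverges whatever we do.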
One remedy one might attempt (which I mentioned in my reply to Dan) would be to extend standard decision theory in the following way. To determine whether an action A is right: