Re: Market failure to sufficiently weigh the future

From: Nick Bostrom (nick@nickbostrom.com)
Date: Tue Oct 31 2000 - 17:21:35 MST


Robin Hanson wrote:

>Hal Finney wrote:
>>
>>evolution, ... The authors' argument does not seem to address the
>>possibility that discounting is rational and appropriate. They view
>>the effect as purely psychological or even philosophical. Discounting
>>the future can be modelled as a fully rational process.
>
>It is standard practice to describe the discount rate in simple
>examples where there is no uncertainty, but I'm very sure the authors
>are well aware that real decisions have uncertainty, and that they
>intended their discount rate to be used in the standard expected
>utility framework when there is uncertainty.

A few possibly clarifying points:
1. As Robin points out above, we factor out uncertainty, growth, etc., and
focus on the remaining tendency that people have to discount the future.
2. What we are looking at is what a benevolent social planner who is a
preference utilitarian would do, i.e., one whose task is to maximize total
preference-satisfaction.
3. Preference-satisfaction is distinct from pleasure. On the preference
utilitarian's view, it is good that a preference be satisfied even if the
person who has the preference never finds out about it and thus never
experiences any pleasure as a result.
4. Preferences of future and past people count for as much as current
preferences of existing people.
5. Although the authors end up drawing policy conclusions, we can set that
aside, temporarily at least. The step from what the ideal social planner
would choose to recommending certain kinds of government intervention
requires extra arguments that take into account practicalities of
implementation.

Now, the authors argue that we discount the future but that we don't have a
corresponding inverse tendency to "over-appreciate" the past. Although I
may have some preference that my past was happy, I tend to prefer current
or near-future felicity to felicity in the past. This
seems a plausible claim to me. If one argues that we haven't evolved to
have any strong preferences about the past, this would only strengthen the
authors' claim.

From this, it is then shown that if each observer-moment makes choices
according to its own preferences, less total preference-satisfaction
results than if a benevolent social planner could subsidize resource
transfers to later observer-moments (i.e., saving).
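
To make that concrete, here is a minimal numerical sketch in Python. The
two-moment setup, the square-root utility function, and the discount
factor d = 0.5 are illustrative assumptions of mine, not anything from
the paper:

    # Two observer-moments split one unit of a resource.
    # The myopic chooser at t=0 maximizes u(c) + d*u(1-c);
    # the planner maximizes the undiscounted sum u(c) + u(1-c).
    d = 0.5                      # illustrative discount factor
    u = lambda c: c ** 0.5       # illustrative concave utility

    c_myopic = 1 / (1 + d**2)    # closed-form maximizer of u(c) + d*u(1-c)
    c_planner = 0.5              # equal weights imply an equal split

    def total(c):                # the undiscounted sum the planner cares about
        return u(c) + u(1 - c)

    print(total(c_myopic))       # ~1.342: the myopic moment over-consumes
    print(total(c_planner))      # ~1.414: shifting resources to t=1 raises the total

The gap between the two totals is exactly what the planner's subsidy to
saving would recover.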

Another consequence is that if we consider an individual who lives for 10
time units, has the same kind of preferences throughout his life, and has
identical past and future discount rates, then it is more important that
the preferences he had in mid-life be satisfied than those he had at the
beginning and end of his life. For by satisfying a preference he had at
t=5, you satisfy that preference itself, the preference he had at t=4
that his t=5 preference would be satisfied, the preference he had at t=6
that his t=5 preference be satisfied, and so on. By contrast, satisfying
a preference at t=10 satisfies only the preference at t=10, the
preference at t=9, and preferences at earlier times, which are discounted
ever more heavily the more distant they are from t=10.
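
A quick way to see this is to compute, for each time t, the total weight
that all ten of the person's observer-moments place on the t-preference
being satisfied. A Python sketch, where the discount factor d = 0.9 is an
illustrative choice of mine:

    # Weight behind satisfying the preference held at time t: the
    # observer-moment at every time s also prefers, with weight
    # d^|s-t|, that the t-preference be satisfied.
    d = 0.9
    T = 10

    def weight(t):
        return sum(d ** abs(s - t) for s in range(1, T + 1))

    for t in range(1, T + 1):
        print(t, round(weight(t), 2))
    # The weights peak in mid-life (t = 5 and 6, ~7.78) and fall to
    # ~6.51 at t = 1 and t = 10, where support comes from one side only.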

Here are two further ruminations that I haven't thought through carefully:

A. Yet another consequence is that if you want the social planner to give
you what you want, you may create lots of individuals who have as their
main preference in life that your preferences be satisfied. (This is
somewhat analogous to paying people to pray for your soul after you're dead
in the hope that a preference-satisfying God will choose not to ignore so
many people's wishes.) If this seems counterintuitive, one should bear in
mind that maybe it's not ethical to create individuals for that purpose.
But once they are there, their preferences count. Suppose, then, that you
are ethically entitled to use your capital for such purposes. Then one
thing you could do, if you were very rich, is create a sufficient number
of individuals whose only wish was that your wishes be granted. The
preferences of all these wishers could then outweigh the counter-wishes
of other people, so that you would acquire the right, for example, to
take somebody else's spaceship, since that would satisfy so many more
preferences. It seems
wasteful to require the rich guy to actually create all the wishers, so
maybe he has a right to take your spaceship simply in virtue of being so
rich and having the potential to create the wishers. That seems rather
absurd, so we should conclude that you don't have a generic ethical right
to use your capital to create new individuals with whatever preferences you
fancy.
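
The arithmetic underneath this, sketched with an illustrative
normalization of one unit of weight per preference (my assumption, not
anything in the paper):

    # The owner's wish to keep the spaceship versus the rich person's
    # wish plus the wishes of N created wishers.
    def planner_verdict(n_wishers):
        support = 1 + n_wishers      # rich person + created wishers
        opposition = 1               # the spaceship's current owner
        return "transfer" if support > opposition else "keep"

    print(planner_verdict(0))    # keep
    print(planner_verdict(10))   # transfer -- the wishers tip the scale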

B. Presumably you don't have to be actively aware of a desire for X in
order to have a preference for X (otherwise we would have far fewer
preferences than economists ascribe to us). But do we also have
preferences while we sleep (dreamlessly)? While cryonically suspended? Or,
in the case of an upload, during the time when you are not running? An
affirmative answer would seem to have the counterintuitive consequence
that merely by suspending your mind for a sufficiently long period, you
would acquire an ethical entitlement to whatever you want.
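
To spell out why, assuming (my reading, not the paper's) that a suspended
mind counts as holding its preference at every moment, and that the
planner weighs all moments equally, as per point 4 above:

    # If a persisting preference earns one unit of weight per moment
    # held, a long enough suspension outweighs any fixed opposition.
    def persistent_weight(moments_held):
        return moments_held          # one unit per observer-moment

    rival_weight = 1000              # hypothetical fixed counter-preference
    for m in (10, 1000, 10**6):
        print(m, persistent_weight(m) > rival_weight)
    # False, False, True: suspend long enough and your preference wins.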

Dr. Nick Bostrom
Department of Philosophy
Yale University
Homepage: http://www.nickbostrom.com


