Nick Bostrom writes:
>> >Let me put it like this. The amnesia heuristic sets a lower bound for
>> >what should be included in the reference class.
>> >For example, it clearly makes sense to say that you might not have
>> >known (indeed you may not know) the exact hour of your birth. If you
>> >didn't know that, then you would use some probability distribution
>> >over possible birth hours compatible with what else you know. If you
>> >conditionalize this distribution on your exact birth hour, you should
>> >get back the distribution you held before you forgot the birth hour.
>> >If you don't get back the original distribution then that indicates
>> >that you had forgot to take account of some effect. The doomsday
>> >argument claims that there is such an effect that you have neglected
>> >to take into account.
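The consistency check described in the quoted passage can be sketched numerically. This is a minimal toy model (the uniform distribution over 24 hours and the specific hour are my assumptions, not anything from the thread): after amnesia you hold some distribution over birth hours, and conditionalizing it on the true hour should recover your pre-amnesia belief, a point mass on that hour.

```python
import numpy as np

true_hour = 7  # assumed: before amnesia you knew this exactly

# After amnesia: a distribution over the 24 possible birth hours,
# here uniform for lack of other background information (an assumption).
p = np.full(24, 1 / 24)

# Conditionalize on relearning the exact birth hour: zero out all
# other hours and renormalize.
post = np.zeros(24)
post[true_hour] = p[true_hour]
post /= post.sum()

# Consistency: the posterior is a point mass on the true hour,
# matching the pre-amnesia belief.
print(post[true_hour])  # 1.0
```

If conditionalizing did not return the pre-amnesia distribution, that would signal a neglected effect of exactly the sort the DA claims to identify.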
>> Well, if you're going to rest the whole DA on a mismatch between these
>> two calculations, you have to do a lot better at convincing me there *is*
>> a mismatch. I thought Dieks did a reasonable job of calculating what
>> the person with amnesia should calculate according to the usual view.
>Dieks' calculation rests on the self-indication axiom: that finding
>that you are an observer indicates that there are many
>observers. Only you can answer whether that is what you
>believed before you heard about the DA. Speaking for myself, I
>didn't believe that, ...
I don't see what timing has to do with anything. We've had a standard approach to modeling things like doom for a long time. What is new is asking what this standard approach implies for a creature with amnesia. Perhaps you find it counterintuitive that amnesiacs should be especially optimistic about humanity as a whole. But it is DA advocates who are using this surprise to argue for a change in the standard approach, a change which eliminates this surprise at the expense of creating other surprises elsewhere.
>believed that either, since it was not generally accepted that by
>just sitting back in your armchair you could decide that the universe
>is infinite with probability one, unless it is impossible that it is
>infinite. Most people did, and presumably still do, believe that
>there is some finite probability that the universe is infinite and a
>finite probability that it is finite. If that is what you believe,
>then your prior was certainly not the one Dieks presupposes.
Can you *prove* that under the standard approach, one cannot coherently believe with probability in (0,1) that the universe has a finite number of humans? I'd guess that doing so would require a rather subtle analysis, as things get messy when probabilities and infinities collide.
>> Let me repeat as forcefully as I can: There are standard approaches
>> to formally modeling our prospects for doom, and they don't imply doom
>> soon. To disagree with them, you must dispute some aspect of those
>> models, either their state spaces, priors, or information partitions.
>I don't know what "standard approaches" you are referring to. If
>you mean the one you outlined, I have already said that I think your
>specification of the state space is incoherent, since it presupposes
>that in a world with just one apple and one pear, there is a fact of
>the matter as to whether you are the apple and I'm the pear, or vice
>versa. I simply can't make any sense of that.
No, that isn't the standard approach. That is my attempt to construct a hybrid of the standard approach and what I understand to be the DA approach.
I've tried to explain this at length in my paper: http://hanson.berkeley.edu/nodoom.html
A standard approach to calculating the chance of doom would be to assume that human population grows exponentially, that doom happens when the population reaches 10^d, and then to choose a prior over d based on our understanding of the sorts of processes that might cause doom. A typical choice might be to have d be uniformly distributed between 0 and 20. One then interprets our information that today's population is about 10^10 as just telling us that d is at least ten, and so obtains a posterior that d is uniformly distributed between 10 and 20. This implies a median future growth factor of 10^5 before doom, which is far from "doom soon."
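The calculation above can be checked with a quick Monte Carlo sketch (variable names are mine; the model is exactly the one stated: d uniform on [0, 20], conditioned on today's population of about 10^10):

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior: doom occurs when population reaches 10^d, d ~ Uniform(0, 20).
d = rng.uniform(0, 20, size=1_000_000)

# Condition on the observation that today's population is about 10^10,
# i.e. doom has not yet happened, so d >= 10.
post = d[d >= 10]

# Posterior: d uniform on [10, 20], so the median is 15.
median_d = np.median(post)

# Median future growth factor before doom: 10^15 / 10^10 = 10^5.
growth = 10**median_d / 10**10

print(median_d, growth)  # roughly 15 and 1e5
```

The posterior median of d near 15 gives a median future growth factor of about 10^5, matching the "far from doom soon" conclusion in the paragraph above.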
firstname.lastname@example.org  http://hanson.berkeley.edu/
RWJF Health Policy Scholar, Sch. of Public Health   510-643-1884
140 Warren Hall, UC Berkeley, CA 94720-7360   FAX: 510-643-8614