Robin Hanson wrote:
> 1) Even when it is valid, the argument suggests "change," and
> not necessarily "doom". Creatures like us can no longer exist
> because their descendants are different, but not destroyed.
Whether the "metamorphosis, not doom" interpretation is even possible depends on what the correct solution to the reference class problem is (see below). My present suspicion is that this interpretation is indeed a viable rival to the "doom" interpretation, but it is by no means obvious. And even if we buy this "metamorphosis" scenario, it poses problems for those who hope to personally survive into the posthuman world.
> 2) Our concept of "like us" can be biased by where we are
> in time. No fair picking the last big intelligence change, and
> then defining "like us" to be anything at least as smart as we
> are. Better to exclude smarter creatures as well as dumber.
It comes down to this: From what class should you regard yourself as being a random sample? This is the reference class problem. We need some better way of solving this than appealing to what seems fair. Let me try to summarize my current incomplete ideas on this issue:
The best heuristic I know of is the "amnesia chamber" thought experiment. Imagine you have forgotten certain facts about yourself and you are asked to assign probabilities to various statements. We can readily imagine, I think, that this thought experiment can reveal your subjective probability of having been born in March, conditional on an information set which doesn't include any direct knowledge of your birthday. This probability would presumably be about 1/12 (or slightly greater, since March has somewhat above-average birth rates, I think).
We can extract many other subjective conditional probability estimates through this method. But for some conditional probabilities it won't work. For example:
C= P(You are a human | You know only you are a material object)
The problem is that if we tried to induce an amnesia so deep that you didn't remember anything other than that you are a material object, then we would presumably have caused you to cease to exist as a rational being; in that state you would not come up with any probability estimates at all.
This is not a practical problem of what advanced neuroscientists can or cannot do. It is a conceptual problem. If we modify you into a state where all you remember is that you are a material object, then it's just not you any longer. What the entity in that state would say is irrelevant, because it wouldn't reflect *your* conditional probabilities.
It therefore seems that it doesn't make sense to ascribe this extreme kind of subjective conditional probability to people. Your conditional probability C is not defined.
This, I think, means that you cannot regard yourself as a random sample from the set of all material objects. How far can we push this methodology?
> 3) It's actually hard to formalize the argument so that the
> implications are big.
> For example, Oliver & Korb show that accepting one's birth rank
> as a random draw from some total population, with a uniform prior
> over that total population, an observation of a low rank changes
> the expected value by less than a factor of ten.
No, they don't show that. They show that it holds for a few special cases. But if you look at the formula at the bottom of page 4 in their tech report, and you hold your own birth rank r constant, you see that the expected value E(N|R=r) ~ U/log U when U is large. [U is the upper bound on the number of people; N is a variable ranging over possible population sizes; R is a variable ranging over possible birth ranks.] But E(N) = U/2. So the quotient E(N) / E(N|R=r) ~ (U*log U) / (2*U) ~ log U, which becomes arbitrarily large as U goes to infinity. So you can get arbitrarily large changes even when you have a uniform prior.
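The asymptotics above can be checked numerically. The following is my own sketch (not code from the Oliver & Korb report), assuming the standard DA setup: a uniform prior on the total population N over {r, ..., U} and likelihood P(R=r | N) = 1/N for one's birth rank r.

```python
# Numerical check of the claim E(N|R=r) ~ U/log U for fixed r and
# large U, so that E(N)/E(N|R=r) grows like log U.
import math

def posterior_mean(r, U):
    """E(N | R=r) under a uniform prior on N in {r, ..., U}."""
    # Posterior P(N | R=r) is proportional to 1/N, so the
    # normalizing constant is the harmonic tail sum H_U - H_{r-1}.
    norm = sum(1.0 / n for n in range(r, U + 1))
    # The numerator sum(N * 1/N) simplifies to the count of terms.
    return (U - r + 1) / norm

r = 100
for U in (10**4, 10**5, 10**6):
    prior_mean = (r + U) / 2.0            # E(N) under the uniform prior
    post_mean = posterior_mean(r, U)      # roughly U / log(U/r)
    print(U, prior_mean / post_mean)      # ratio keeps growing with U
```

Running this shows the quotient E(N)/E(N|R=r) increasing with U (roughly as log(U/r)/2), which is the point at issue: the shift from the prior does not stay within a factor of ten.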
In general, the DA will have extra bite if one thinks that we will soon colonize the galaxy and thereafter be essentially invulnerable to extinction (as Mike Price argued in a fascinating discussion I had with him the other night under a starry sky).
> As another example, Leslie admits in his "shooting room" example
> that if the probability of "doom" is constant with time independent
> of population size, the doomsday argument fails.
Leslie thinks the DA works in the shooting room in the deterministic version (see p. 254). He doesn't think it works in the radically indeterministic case, but then again he doesn't believe the DA works in other contexts either if radical indeterminism holds. It has nothing to do with whether the probability of doom is constant over time.
> Finally, Nick shows how the argument gets weakened as one allows
> for alien creatures "like us" who won't get hurt by our local doom.
Yes, but the fact that we are not posthumans could be taken to indicate that there will not have been a great many alien posthumans. So either there are only a few alien species, in which case the weakening is slight; or there are many alien species, but then almost all of them go extinct before they become posthuman, and why should we expect to be different?
> 4) Nick explains well how the doomsday argument assumes that
> one was guaranteed to show up as a creature "like us" at sometime
> in our universe, regardless of how many such creatures this
> universe produces. If one instead assumes that the probability of
> finding oneself in a universe is proportional to population of that
> universe, the doomsday argument evaporates. This latter assumption
> seems much more reasonable to me, and to Kopf, Krtous, & Page.
> I buy it even if the different universes are only "possible"
> rather than coexisting in some way.
Yes, this was the objection that I once thought refuted the DA. But on second thought there turned out to be big problems with it: the no-coincidence objection and the infinity objection. I outline these in my Investigations paper. How do you suggest we deal with these difficulties?