Eliezer S. Yudkowsky wrote:
> Nick Bostrom wrote:
> >
> > Eliezer S. Yudkowsky wrote:
> >
> > > Of course. There are eleven people with the correct digit and nine
> > > people with nine different incorrect digits. Ergo, your digit is
> > > probably the right one.
> >
> > Are you aware that this is the same reasoning that gives rise to the
> > Doomsday argument? Do you accept that argument? If not, why?
>
> What are we talking about, here? The idea that we probably live in the
> time with the largest population, ergo there are no galactic civilizations?
Very roughly, yes.
> Nope. Argument falls apart if the population keeps growing infinitely.
You are right that the infinite case is problematic. However, problematic is not the same as wrong; and anyway it does not settle the finite case.
Let's look at the infinite case, however. Try the following variant of Wei's example: There is a countable infinity of people with various even numbers; and in addition there are either (A) ten people with number 3 and one person with number 5; or (B) one person with number 3 and ten people with number 5. Suppose that you find that you have number 3. It would seem reasonable for you to think that, given this, A is more likely than B. And yet this is an example where there is an infinite population and where the argument consequently "breaks down" -- the conditional probability of you having number 3 is zero (or infinitesimal) on both A and B.
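One concrete way to see why A still wins is to restrict attention to the odd-numbered people, since the infinite even-numbered crowd is identical under both hypotheses and drops out of the comparison. A minimal sketch of the resulting Bayesian update, with the priors and counts taken from the example above (the restriction to the odd-numbered subpopulation is my device for illustration, not part of the original argument):

```python
# Bayesian update for the variant of Wei's example, restricted to the
# finite subpopulation of odd-numbered people (the countably infinite
# even-numbered people are the same under A and B, so they cancel).
prior = {"A": 0.5, "B": 0.5}

# Counts of odd-numbered people under each hypothesis.
counts = {
    "A": {3: 10, 5: 1},   # A: ten people with number 3, one with number 5
    "B": {3: 1, 5: 10},   # B: one person with number 3, ten with number 5
}

def posterior(observed):
    """P(hypothesis | my number == observed), given I am odd-numbered."""
    likelihood = {h: counts[h][observed] / sum(counts[h].values())
                  for h in prior}
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

print(posterior(3))  # A comes out at 10/11, about 0.91
```

Finding number 3 thus favours A by 10:1, exactly as intuition suggests, even though the unconditional probability of having number 3 is zero on both hypotheses.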
> I mean, what you're saying is that if I don't know whether I'm in an
> environment where everyone has different numbers or everyone has the
> same number, no matter which number I get, I should predict that
> everyone has the same number.
No, that depends on what the background information is.
Suppose you have two hypotheses, each with 50% prior probability: (H1) There is a number n (0<=n<=9) such that everybody has the number n; (H2) For each number n (0<=n<=9), one tenth of all people have number n. Then finding that you have, say, number 7 doesn't give you any reason to prefer H1 to H2.
But suppose that instead of H1 you had (H1*): Everybody has number 7. And suppose that the prior probability of H1* is 50% and that the prior probability of H2 is also 50%. Then, finding that you have number 7 does indeed give you reason to think that H1* is true.
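The contrast between the two cases can be checked with a one-line Bayes calculation. A sketch, assuming (as seems implicit above) that under H1 the shared digit n is uniform over 0..9:

```python
# Finding you have number 7 under three hypotheses:
#   H1:  everybody shares some digit n, n uniform over 0..9
#   H1*: everybody has number 7
#   H2:  each digit 0..9 is held by one tenth of all people
like_h1 = 0.1       # P(my number is 7 | H1)  -- 1/10 chance n == 7
like_h1star = 1.0   # P(my number is 7 | H1*) -- certain
like_h2 = 0.1       # P(my number is 7 | H2)  -- one tenth have 7

# H1 vs H2, priors 50/50: likelihoods are equal, so no update.
post_h1 = 0.5 * like_h1 / (0.5 * like_h1 + 0.5 * like_h2)
print(post_h1)       # 0.5 -- finding 7 doesn't favour H1 over H2

# H1* vs H2, priors 50/50: finding 7 strongly favours H1*.
post_h1star = 0.5 * like_h1star / (0.5 * like_h1star + 0.5 * like_h2)
print(post_h1star)   # 10/11, about 0.91
```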
> The Doomsday argument doesn't look very predictive. I mean, every
> single generation except ours that tried to use it would be wrong,
> right?
No, it doesn't say that our generation will be the last. It says that we have underestimated the risk that there won't be very many generations after ours. So we don't yet know whether our grandparents (or their grandparents) would have been wrong if they had applied the DA. But if the first few humans had applied it, then, yes, they would have been misled.
> And we have no reason to think that we'll be different, right?
> According to the "predict the present given the past" clause, this
> heuristic is no good.
This is a common objection, but it is incorrect. The DA is a probabilistic argument, and as such it can and will give misleading results if applied in untypical circumstances. The first few humans were in untypical circumstances, and therefore it is not surprising that they would have been misled. But try this exercise: Suppose everybody that will ever have lived applied the DA. Will that lead to a greater or a smaller fraction being right than if nobody applies it? We cannot demand of a probabilistic principle that it never misleads us; only that, following it, we will be right more often than not.
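A crude version of this exercise can be run directly. This sketch uses only the underlying sampling logic, not the full DA: suppose each person, whatever their birth rank, guesses "I am not among the first tenth of all people who will ever live" (the 10% cutoff is my choice for illustration):

```python
# If everyone who ever lives makes the guess "I am not in the first
# 10% of all people", then whatever the total population N turns out
# to be, only the first N // 10 people guess wrong.
def fraction_right(total_population):
    threshold = total_population // 10   # birth ranks 1..threshold guess wrong
    right = sum(1 for rank in range(1, total_population + 1)
                if rank > threshold)
    return right / total_population

for n in (100, 10_000, 1_000_000):
    print(n, fraction_right(n))  # about 0.9 in every case
```

About 90% of all guessers come out right no matter how history actually goes; the early people are systematically misled, but they are, by construction, a small minority.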
> Actually, I don't know if I accept the Doomsday argument. Maybe when I
> decide, I'll add another adjective to Yudkowsky's Modified Anthropic
> Occam's Razor.
A suicidal twist of Occam's razor?
> But if I did accept the Doomsday argument, given the number of times I
> have to use the Anthropic principle to explain my own existence - once
> for the noncomputable qualia, and another time for being a Specialist,
> and another time for having the chance to do something fun and
> important, e.g. Singularity - I'd have to assume I was a computer
> simulation, right?
Not necessarily. Only if you assume (1) that there are many more observers in computer simulations than in the flesh; and (2) that people in computer simulations are in the same reference class as people in the flesh.
> I mean, I'm something out of science fiction, so I'd
> have to assume I was *actual* science fiction. There are more books
> than people, right? This set of assumptions explains all the facts at a
> much higher relative probability, right?
Fictional people in books are not in the reference class. They don't really observe or find themselves as anything. People in computer simulations are a different matter; they might be in the reference class.
Nick Bostrom
http://www.hedweb.com/nickb n.bostrom@lse.ac.uk
Department of Philosophy, Logic and Scientific Method
London School of Economics