RE: Parallel Universes

From: Lee Corbin (lcorbin@tsoft.com)
Date: Tue Feb 11 2003 - 23:48:49 MST

    Eliezer writes

    > Wei Dai wrote:
    >
    > > I find it interesting that probabilities must work differently for these
    > > two types of people. Consider a thought experiment where you're shown a
    > > computer printout which is supposed to be the millionth bit in the binary
    > > expansion of pi. However the computer is somewhat faulty and in 1% of the
    > > branches of the multiverse it gives an incorrect answer. Now you're asked
    > > to guess the true value of the millionth bit of pi (let's call this X). If
    > > you guess correctly you'll be rewarded, otherwise punished, with about
    > > equal severity. Suppose you see the number 0 on your printout, and that
    > > you don't have any other information about what X is, and you can't
    > > compute it in your head. Obviously both the egoist-temporalist and the
    > > altruist-Platonist would choose to guess 0, but their reasoning process
    > > would be different.
    > >
    > > Before seeing the printout, both types believe that the probability of X
    > > being 0 is .5. After seeing the printout, the egoist-temporalist would
    > > apply Bayes's rule and think that the probability of X being 0 is .99.
    > > He reasons that guessing 0 leads to a .99 probability of reward and .01
    > > probability of punishment. His expected utility of choosing 0 is
    > > .99*U(reward) + .01*U(punishment).
    > >
    > > An altruist-Platonist would instead continue to believe that the
    > > probability of X being 0 is .5.
    >
    > No.

    I agree, on the grounds that there is no longer any basis for the .5
    estimate. I don't believe that in any of the Tegmark universes pi
    can take any value other than the one it actually has. Therefore,
    the altruist-Platonist would believe that p(X=0) is .99.
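
    For concreteness, here is a minimal Python sketch of that update,
    with the flat .5 prior and the 1% printer error rate as the only
    inputs (the function name and defaults are merely illustrative):

        # Posterior that the true bit X is 0, given a printout reading 0,
        # with a flat .5 prior on X and a printer wrong on 1% of branches.
        def posterior_x_is_zero(prior=0.5, error_rate=0.01):
            p_see0_if_x0 = 1 - error_rate   # printout shows 0 and is right
            p_see0_if_x1 = error_rate       # printout shows 0 and is wrong
            p_see0 = prior * p_see0_if_x0 + (1 - prior) * p_see0_if_x1
            return prior * p_see0_if_x0 / p_see0   # Bayes's rule

        print(posterior_x_is_zero())   # 0.99

    The arithmetic is the same whether one reads the .99 as a credence
    about X or as the fraction of branches whose printout was right.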
     
    > > Note that if he did apply Bayes's rule, then his expected utility would
    > > instead become .99*U(.99m people rewarded) + .01*U(.01m people punished)
    > > which would weight the reward too heavily. It doesn't matter in this case
    > > but would matter in other situations.

    What do you mean, "it doesn't matter in this case"? On one reading
    of that, I must ask you to supply a case, if possible, where it
    does matter.

    > If he chooses to apply Bayes's rule, then his expected
    > *global* utility [of choosing 0] is p(1)*u(.99m rewarded)
    > + p(1)*u(.01m punished),

    Yes. And that (as you write later) is merely

    p(1) * u(the event that .99 of 'em get blissed and .01 of 'em get zapped)

    which is not at all surprising. And is it not the same as what Wei Dai
    had written

    > > .99*U(reward) + .01*U(punishment).

    in easier-to-understand language?
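
    To spell out why I read these as the same: if one assumes (my
    assumption here, not anything you or Wei stated) a global utility
    that simply adds up per-observer utilities, the global expression is
    just m times Wei Dai's per-observer formula. A Python sketch with
    arbitrary illustrative numbers:

        # Sketch only: assume the global utility is additive over observers.
        m = 1_000_000                     # number of observers
        U_reward, U_punish = 1.0, -1.0    # arbitrary per-observer utilities

        global_u = 0.99 * m * U_reward + 0.01 * m * U_punish   # the two p(1)*u(...) terms
        per_observer_u = 0.99 * U_reward + 0.01 * U_punish     # Wei Dai's formula

        assert abs(global_u - m * per_observer_u) < 1e-6       # equal up to the factor m

    Under that additivity assumption the two expressions differ only by
    the constant factor m, so they rank the two possible guesses
    identically.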

    Lee

    > Roughly speaking, if a Bayesian altruist-Platonist sees X=0, his
    > expected global utility is:

    > p(.99)*u(.99m observers see 0 => .99m observers choose 0 => .99m observers
    > are rewarded && .01m observers see 1 => .01m observers choose 1 => .01m
    > observers are punished)
    > +
    > p(.01)*u(.99m observers see 1 => .99m observers choose 1 => .99m observers
    > are rewarded && .01m observers see 0 => .01m observers choose 0 => .01m
    > observers are punished)
    > =
    > p(1)*u(.99m observers are rewarded && .01m observers are punished)
    > =
    > p(1)*u(.99m observers rewarded) + p(1)*u(.01m observers are punished)
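
    A small Python sketch of that collapse, again assuming (purely for
    illustration) that the global utility is additive over observers:
    both branches of the update describe the same global outcome, so the
    posterior weights .99 and .01 drop out.

        # Both branches describe the same global outcome:
        # .99m observers rewarded and .01m observers punished.
        m = 1_000_000
        U_reward, U_punish = 1.0, -1.0    # arbitrary per-observer utilities

        def u(rewarded, punished):
            # assumed additive global utility
            return rewarded * U_reward + punished * U_punish

        branch_x0 = u(0.99 * m, 0.01 * m)   # X really is 0
        branch_x1 = u(0.99 * m, 0.01 * m)   # X really is 1: same counts, different observers
        expected = 0.99 * branch_x0 + 0.01 * branch_x1

        assert abs(expected - u(0.99 * m, 0.01 * m)) < 1e-6   # p(1)*u(.99m rewarded && .01m punished)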


