Re: Parallel Universes

From: Wei Dai (weidai@weidai.com)
Date: Tue Feb 11 2003 - 20:03:50 MST


    On Mon, Feb 10, 2003 at 06:45:04PM -0500, Eliezer S. Yudkowsky wrote:
    > The problem is when the existence of all possible universes and subjective
    > experiences interferes with your ability to make choices depending on
    > which futures come into existence. But as long as you can define a
    > measure of relative frequency on those subjective experiences, the
    > relative frequency of happy subjective experiences relative to unhappy
    > ones remains strongly dependent on our choices. The probabilities and
    > goal dynamics seem to stay the same for an egoist-temporalist trying to
    > control the subjective future probabilities of their own timeline, and an
    > altruist-Platonist who believes that relative frequencies within a
    > timeless reality are correlated with the output of the deterministic,
    > multiply instantiated, Platonic computational processes that we experience
    > as decisionmaking.

    I find it interesting that probabilities must work differently for these
    two types of people. Consider a thought experiment where you're shown a
    computer printout which is supposed to be the millionth bit in the binary
    expansion of pi. However, the computer is somewhat faulty and in 1% of the
    branches of the multiverse it gives an incorrect answer. Now you're asked
    to guess the true value of the millionth bit of pi (let's call this X). If
    you guess correctly you'll be rewarded, otherwise punished, with about
    equal severity. Suppose you see the number 0 on your printout, and that
    you don't have any other information about what X is, and you can't
    compute it in your head. Obviously both the egoist-temporalist and the
    altruist-Platonist would choose to guess 0, but their reasoning process
    would be different.
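
    To make the setup concrete, here is a minimal simulation sketch (my own
    illustration, not from the post; the only parameters are the 1% error
    rate and the convention of guessing whatever the printout shows). It
    just checks that this strategy is rewarded in about 99% of branches:

        import random

        def run_trial():
            # The "true" millionth bit of pi. It is really a fixed
            # mathematical fact; it is drawn at random here only to model
            # the guesser's ignorance of it.
            x = random.randint(0, 1)
            # The faulty printer flips the bit in 1% of branches.
            printout = x if random.random() > 0.01 else 1 - x
            guess = printout  # both reasoners guess what they see
            return guess == x

        trials = 100_000
        wins = sum(run_trial() for _ in range(trials))
        print(wins / trials)  # roughly 0.99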

    Before seeing the printout, both types believe that the probability of X
    being 0 is .5. After seeing the printout, the egoist-temporalist would
    apply Bayes's rule and think that the probability of X being 0 is .99.
    He reasons that guessing 0 leads to a .99 probability of reward and .01
    probability of punishment. His expected utility of choosing 0 is
    .99*U(reward) + .01*U(punishment).
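
    A short worked version of that calculation (a sketch only; the utility
    values are placeholders standing in for "about equal severity"):

        prior_x0 = 0.5           # prior that the millionth bit is 0
        p_read0_if_x0 = 0.99     # printer is right in 99% of branches
        p_read0_if_x1 = 0.01     # printer is wrong in 1% of branches

        # Bayes's rule: P(X = 0 | printout shows 0)
        posterior_x0 = (p_read0_if_x0 * prior_x0) / (
            p_read0_if_x0 * prior_x0 + p_read0_if_x1 * (1 - prior_x0)
        )                        # = 0.99

        U_reward, U_punishment = 1.0, -1.0   # placeholder utilities
        eu_guess_0 = (posterior_x0 * U_reward
                      + (1 - posterior_x0) * U_punishment)
        print(posterior_x0, eu_guess_0)      # 0.99, 0.98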

    An altruist-Platonist would instead continue to believe that the
    probability of X being 0 is .5. He reasons that if X is 0, then his
    current measure is .99m (where m is the measure of himself before seeing
    the printout), and if X is 1, then his current measure is .01m. So
    guessing 0 would lead to a .5 probability of .99m people being rewarded
    and .5 probability of .01m people being punished. His expected utility of
    choosing 0 is .5*U(.99m people rewarded) + .5*U(.01m people punished).
    Note that if he did apply Bayes's rule, then his expected utility would
    instead become .99*U(.99m people rewarded) + .01*U(.01m people punished),
    which would weight the reward too heavily. It doesn't matter in this case
    but would matter in other situations.
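
    A toy comparison of the two weightings (again just an illustration of
    the arithmetic above; the square-root utility is an arbitrary nonlinear
    choice to make the difference visible, not anything from the post):

        import math

        m = 1.0                              # measure before the printout

        def U_rewarded(measure):             # placeholder utility of that
            return math.sqrt(measure)        # much measure being rewarded

        def U_punished(measure):
            return -math.sqrt(measure)

        # Keeping the mathematical probability of X at .5, as the
        # altruist-Platonist does:
        eu_platonist = 0.5 * U_rewarded(0.99 * m) + 0.5 * U_punished(0.01 * m)

        # The Bayes-updated weighting he should not use:
        eu_bayesian = 0.99 * U_rewarded(0.99 * m) + 0.01 * U_punished(0.01 * m)

        print(eu_platonist, eu_bayesian)     # about 0.45 vs 0.98

    Both weightings still favor guessing 0 here, which is the sense in which
    it "doesn't matter in this case"; with other utility functions or payoff
    structures the two could come apart.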

    What's weird and interesting is that for the altruist-Platonist, empirical
    data don't change his subjective probabilities of mathematical statements.
    They only tell him what his current measure is and what the measures of
    the people he can affect are, under various mathematical assumptions whose
    probabilities are determined only by internal calculations. This seems to
    be a very alien way of reasoning, which I'm not sure any humans can
    practice in daily life. But which type would the Friendly AI be, I wonder?


