Re: The Future of Secrecy

From: Robin Hanson (rhanson@gmu.edu)
Date: Wed Jun 18 2003 - 17:33:58 MDT

    At 07:08 PM 6/18/2003 -0400, Wei Dai wrote:
    > > I face a tradeoff. The more confident I become I like her the worse my
    > > future decisions will be (due to the difference y-x), but the more she
    > > will be reassured of my loyalty (due to a high y). The higher my x,
    > > the higher a y I'm willing to choose in making this tradeoff. So the
    > > higher a y she sees, the higher an x she can infer. So this is all
    > > really costly signaling.
    >
    >I don't think this is a good example of self-deception. If your girlfriend
    >can infer what your original x was from y, then so can you, and the whole
    >thing breaks down.

    I agree that this is a problem with this model. I need to think more about
    what a good model would be. Your proposed model doesn't seem right either.
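
    To make Wei's objection concrete, here is a minimal numeric sketch of the
    signaling tradeoff described above. It assumes a simple quadratic utility
    U(y; x) = b*y - c*(y - x)^2 (the functional form, and the names b and c,
    are my illustrative assumptions, not part of the original model): b is the
    benefit of reassuring her with a high displayed confidence y, and c is the
    cost of worse future decisions from the gap y - x.

    ```python
    # Hedged sketch of the costly-signaling tradeoff, under the assumed
    # utility U(y; x) = b*y - c*(y - x)**2, where
    #   x = true confidence, y = displayed confidence,
    #   b = benefit of reassuring the partner (rises with y),
    #   c = cost of distorted future decisions (rises with y - x).

    def optimal_display(x, b=1.0, c=2.0):
        """Displayed confidence y* maximizing b*y - c*(y - x)**2.

        First-order condition b = 2*c*(y - x) gives y* = x + b/(2*c).
        """
        return x + b / (2 * c)

    def inferred_x(y, b=1.0, c=2.0):
        """Invert y* back to x -- anyone who knows b and c can do this."""
        return y - b / (2 * c)

    for x in (0.2, 0.5, 0.8):
        y = optimal_display(x)
        print(x, round(y, 3), round(inferred_x(y), 3))
    ```

    The optimal y* is a fixed shift of x, so y* is monotone in x and the
    observer can infer x exactly from y. But then, as Wei notes, the signaler
    can perform the same inversion on himself, which is exactly why this model
    fails as an account of self-deception.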

    >Or think about it this way. There have to be certain private vaults within
    >your brain that are not open to inspection. They would contain things like
    >your ATM password, or the fact that you think company X might be a really
    >great investment opportunity. How could the inspector know that you have
    >not hidden your real beliefs in these vaults?

    Eliezer S. Yudkowsky raised a similar issue:
    >Without access to the whole mind, how would you know that the small data
    >structure you saw was really that person's belief, and not a decoy?
    >Verifying that a belief would really be used, verifying that this is
    >really the data that would be used by the mind, seems scarcely less
    >difficult than detecting rationalizations or rationalization brainware.

    We are obviously getting pretty speculative here, trying to estimate which
    computations are more or less costly for our distant descendants. My
    intuition is that it shouldn't be that hard to verify which data structures
    are used for choosing ordinary actions, but much harder to verify that the
    process that chose those beliefs is unbiased. But my
    computation intuitions are not as good as they once were (having been away
    from it for a while); I'm interested to hear what other folks' intuitions
    say.

    Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
    Assistant Professor of Economics, George Mason University
    MSN 1D3, Carow Hall, Fairfax VA 22030-4444
    703-993-2326 FAX: 703-993-2323



    This archive was generated by hypermail 2.1.5 : Wed Jun 18 2003 - 17:45:23 MDT