Re: The Future of Secrecy

From: Wei Dai (weidai@weidai.com)
Date: Wed Jun 18 2003 - 17:08:44 MDT


    You're neglecting an important second reason for secrecy:
    protecting one's intellectual property. Secrecy is not going to
    disappear, because this second reason will remain in force.

    On Wed, Jun 18, 2003 at 03:28:51PM -0400, Robin Hanson wrote:
    > This seems to suggest a discrete split in our futures. Either agents,
    > or certain internal modules, are standardized enough to allow biases to
    > be checked, giving truly unbiased agents or modules, or biases are hard
    > to see but beliefs are not, so that agents self-deceive.

    It's not enough that agents be standardized; their mental architecture
    has to be designed so that biases can be checked while their intellectual
    property remains protected. This seems very difficult to me, if not
    impossible.

    > To see that there really is a demand for such self-deception, let's
    > work through an example. Let us say I know how much I really like my
    > girlfriend (x), and then I choose my new beliefs (y), under the
    > expectation that my girlfriend will then see those new beliefs, but not
    > any memory of this revision process. (Of course it wouldn't really be
    > like this; analyzing this is a simple way to see the tradeoffs.)
    >
    > I face a tradeoff. The more confident I become that I like her, the worse
    > my future decisions will be (due to the difference y-x), but the more she
    > will be reassured of my loyalty (due to a high y). The higher my x,
    > the higher a y I'm willing to choose in making this tradeoff. So the
    > higher a y she sees, the higher an x she can infer. So this is all
    > really costly signaling.

    I don't think this is a good example of self-deception. If your girlfriend
    can infer what your original x was from y, then so can you, and the whole
    thing breaks down.
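
    To make that inference concrete, here is a minimal sketch (assuming, purely
    for illustration, a quadratic decision cost and a linear reassurance
    benefit, neither of which is part of Robin's description):

        # The agent knows x and picks a new belief y to maximize
        # b*y - c*(y - x)**2, where b and c are hypothetical weights on
        # reassurance and on the cost of worse future decisions.
        b, c = 1.0, 2.0

        def chosen_belief(x):
            # The maximizer is y = x + b/(2*c): the adopted belief is the
            # true value plus a fixed shift, so y fully reveals x.
            return x + b / (2 * c)

        for x in (0.2, 0.5, 0.8):
            y = chosen_belief(x)
            print(f"true x = {x:.1f} -> chosen y = {y:.2f} "
                  f"(x recoverable as y - {b / (2 * c):.2f})")

    Since the shift is the same for every x, both she and you can subtract it
    off and recover x exactly.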

    It seems to me that self-deception should be thought of as an attempt to
    hide information, rather than to signal it. Here is my example. Suppose
    there is an applicant for a software engineering position whose skill
    level you believe to be uniformly distributed between 0 and 1. He has an
    incentive to appear to believe that his skill level is 1, no matter what
    it really is.
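
    A toy version of this (the numbers are mine, purely illustrative): if
    adopting the belief "my skill is 1" costs nothing, every applicant adopts
    it regardless of his true skill, and reading that belief tells you nothing
    beyond your prior.

        import random

        # True skill is uniform on [0, 1], but every applicant presents the
        # belief "my skill is 1", so the read belief carries no information.
        random.seed(0)
        true_skills = [random.random() for _ in range(10_000)]
        read_beliefs = [1.0 for _ in true_skills]   # everyone pools at 1

        print(f"average read belief: {sum(read_beliefs) / len(read_beliefs):.2f}")
        print(f"average true skill:  {sum(true_skills) / len(true_skills):.2f}")

    The best estimate of his skill is just the prior mean, about 0.5, no
    matter what belief you read off.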

    Now to illustrate the conflict between checking biases and protecting
    intellectual property: suppose that while reading the applicant's mind, you
    notice that one reason he believes he has a very high skill level is
    that he believes he invented some very effective algorithms in the past.
    What kind of mental architecture will allow you to verify that this
    belief is unbiased without also letting you learn what the algorithms
    actually are?

    Or think about it this way. There have to be certain private vaults within
    your brain that are not open to inspection. They would contain things like
    your ATM password, or the fact that you think company X might be a really
    great investment opportunity. How could the inspector know that you have
    not hidden your real beliefs in these vaults?


