Re: The Future of Secrecy

From: Wei Dai (weidai@weidai.com)
Date: Thu Jun 19 2003 - 15:49:53 MDT


    On Wed, Jun 18, 2003 at 09:14:41PM -0700, Hal Finney wrote:
    > In the book The Golden Age, the truth-detection was done by the
    > Sophotechs, the super-AIs millions of times smarter than people.
    > These AIs were also trustworthy ("friendly"?), so if they learned
    > someone's secrets it was OK. It was a sort of literal deus ex machina
    > solution to the problem.

    So suppose I'm negotiating with you to buy your used car, and I want
    to verify that you're not hiding any problems with the car from me.
    I'd ask a super-AI to read your mind? Wouldn't it be a lot easier
    just to have the super-AI build a new car for me? Or better yet, have
    the AI construct a nanotech body for me that can travel faster than
    any car. :)

    The point is that if we need super-AIs to detect lying or
    self-deception, then there can be no economic argument for secrets
    going away.

    > As far as the issue of keeping secrets that Wei raised, there might be a
    > few possible solutions. One would be to create an AI which would read
    > your mind, report on your truthfulness, and then self-destruct. Or,
    > given such an AI program, in principle you could use a crypto algorithm,
    > a zero knowledge proof, to show that your brain state would satisfy the
    > AI program, without actually running it.

    I don't see how you could do this with zero knowledge proof techniques. As
    far as I know, in order to do a zero knowledge proof of anything you have
    to construct a conventional proof of it first. Zero knowledge only lets
    you convince someone else that a conventional proof exists without having
    to show it to him. Maybe you mean generalized secure multiparty
    computation instead? But in that case the AI is still run, it's just that
    the execution is encrypted and shared between several parties.
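
    For concreteness, here is a toy sketch of the kind of statement
    zero-knowledge techniques actually handle: Schnorr's protocol for
    proving knowledge of a discrete logarithm x with y = g^x mod p,
    without revealing x. Note that the prover has to hold the witness x
    up front; the protocol only hides it from the verifier. The group
    parameters and function names below are purely illustrative.

        # Toy interactive zero-knowledge proof of knowledge of a discrete
        # logarithm (Schnorr's protocol).  The parameters are far too small
        # for real use; they just keep the arithmetic easy to follow.
        import secrets

        # Public parameters: g generates a subgroup of prime order q in Z_p*.
        p, q, g = 23, 11, 2

        # Prover's secret witness x and the corresponding public value y.
        x = 7
        y = pow(g, x, p)

        def prover_commit():
            """Prover picks a random nonce r and commits to t = g^r mod p."""
            r = secrets.randbelow(q)
            return r, pow(g, r, p)

        def prover_respond(r, c):
            """Prover answers the challenge c with s = r + c*x (mod q)."""
            return (r + c * x) % q

        def verifier_check(t, c, s):
            """Verifier accepts iff g^s == t * y^c (mod p)."""
            return pow(g, s, p) == (t * pow(y, c, p)) % p

        r, t = prover_commit()          # 1. prover commits
        c = secrets.randbelow(q)        # 2. verifier issues a random challenge
        s = prover_respond(r, c)        # 3. prover responds
        assert verifier_check(t, c, s)  # accepted; verifier learns nothing about x

    A cheating prover who doesn't know x can pass a single run with
    probability at most 1/q, so real parameters use a large q (and the
    challenge is usually derived from a hash of the commitment to make
    the proof non-interactive).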

    > Another possibility would be to standardize on mental architectures which
    > have a fairly strict separation between the motivational structure and
    > certain kinds of factual knowledge. Then you could expose the one part
    > but keep the other secret, and the mind reader could be confident that
    > the secret data would not have a significant impact. However I'm not
    > sure it makes sense to keep secrets but somehow to have them be unable
    > to affect your motivations.

    It seems to me that whatever economic pressure exists toward
    standardizing on verifiable mental architectures will apply even more
    strongly toward standardizing on motivations. A group of agents with
    the same mental architecture but different motivations faces huge
    disadvantages when competing with a group of agents with a variety of
    mental architectures but the same motivations. The latter group can
    optimize its mental architectures for efficiency and specialization
    rather than for verifiability, since agents who already share
    motivations have little need to verify one another.


