Re: The Future of Secrecy

From: Robin Hanson (rhanson@gmu.edu)
Date: Thu Jun 19 2003 - 20:14:55 MDT

    At 08:56 PM 6/19/2003 -0400, Eliezer S. Yudkowsky wrote:
    >>My intuition is that it shouldn't be that hard to verify what data
    >>structures are used for choosing ordinary actions, and it should be much
    >>harder to verify that the process of choosing those beliefs is unbiased.
    >
    >... if you are a Friendly AI looking at an entire human mind, deliberate
    >deception and rationalization should be about equally easy to detect, and
    >both should be blatant in human minds or near-term derivatives thereof. ...

    As Wei Dai said, the economic rationale for mind transparency goes away if
    it requires a far more expensive mind to see into a cheaper mind. So if
    these AIs are the most sophisticated things around, then the question I'm
    interested in is whether economic pressures encourage them to make
    themselves transparent to each other, and which mental constructs that
    transparency would include.

    Robin Hanson rhanson@gmu.edu http://hanson.gmu.edu
    Assistant Professor of Economics, George Mason University
    MSN 1D3, Carow Hall, Fairfax VA 22030-4444
    703-993-2326 FAX: 703-993-2323



    This archive was generated by hypermail 2.1.5 : Thu Jun 19 2003 - 20:24:16 MDT