Re: The Future of Secrecy

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Wed Jun 18 2003 - 15:57:46 MDT

    Robin Hanson wrote:
    >>
    >> Accepting the scenario, for purposes of discussion... why would
    >> rationalization be any harder to see than an outright lie, under
    >> direct inspection? Rationalization brainware would be visible. The
    >> decision to rationalize would be visible. Anomalous weights of
    >> evidence in remembered belief-support networks could be detected,
    >> and the computing power required for that scrutiny would decrease
    >> as the size of the shift grew. It's possible there would be no
    >> advantage for rationalization over lying; rationalization might
    >> even be bigger and more blatant from a computational standpoint.
    >
    > As I argued in my response to Hal Finney, the process that leads to a
    > belief seems much larger than the resulting belief itself. To verify a
    > belief, you may just need to look up the right data structure. To
    > verify that a mind which produces beliefs is unbiased requires you to
    > look at much more than a small data structure. This diffuse nature of
    > belief production is in fact what makes it hard for us to see our own
    > self-deception.
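
    A minimal sketch of the asymmetry Robin describes, and of the
    anomalous-weight check from the quoted paragraph above, in Python
    with entirely hypothetical structures: verifying a stored belief is
    a single lookup, while verifying that the belief was produced
    without bias means replaying the whole derivation trace.

        from dataclasses import dataclass, field

        @dataclass
        class DerivationStep:
            evidence: str
            weight: float    # how much this evidence shifted the belief

        @dataclass
        class Mind:
            beliefs: dict = field(default_factory=dict)  # claim -> confidence
            traces: dict = field(default_factory=dict)   # claim -> [DerivationStep]

        def verify_belief(mind, claim):
            # Cheap: the belief is a small data structure, one lookup away.
            return mind.beliefs[claim]

        def verify_unbiased(mind, claim, prior):
            # Expensive: re-run the diffuse production process end to end.
            # An anomalous weight anywhere in the trace shows up as a gap
            # between the replayed confidence and the stored one, and the
            # bigger the shift, the less scrutiny is needed to spot it.
            replayed = prior + sum(step.weight for step in mind.traces[claim])
            return abs(replayed - mind.beliefs[claim]) < 1e-9

    The check on the belief touches one entry; the check on the process
    touches every step that produced it.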

    Without access to the whole mind, how would you know that the small data
    structure you saw was really that person's belief, and not a decoy?
    Verifying that a belief would really be used, that this is really the
    data the mind would act on, seems scarcely less difficult than
    detecting rationalizations or rationalization brainware.
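
    To make the decoy worry concrete, a second sketch under the same
    hypothetical assumptions: the belief table a mind exposes for
    inspection and the store its decision procedure actually consults
    need not be the same object, and telling them apart means tracing
    the decision computation rather than reading the small structure.

        class TransparentMind:
            def __init__(self):
                # What an inspector is shown.
                self.displayed_beliefs = {"deal_is_fair": 0.95}
                # What the decision procedure actually reads.
                self._working_beliefs = {"deal_is_fair": 0.10}

            def decide(self, claim):
                # The decision consults the hidden store, not the displayed one.
                return self._working_beliefs[claim] > 0.5

        mind = TransparentMind()
        # The small data structure says one thing...
        assert mind.displayed_beliefs["deal_is_fair"] > 0.5
        # ...while the mind's actual behavior says another; catching the
        # decoy means auditing which store decide() reads.
        assert mind.decide("deal_is_fair") is False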

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

