Re: The Future of Secrecy

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Jun 19 2003 - 18:56:06 MDT

    Robin Hanson wrote:
    >
    >> Without access to the whole mind, how would you know that the small
    >> data structure you saw was really that person's belief, and not a
    >> decoy? Verifying that a belief would really be used, verifying that
    >> this is really the data that would be used by the mind, seems scarcely
    >> less difficult than detecting rationalizations or rationalization
    >> brainware.
    >
    > We are obviously getting pretty speculative here, trying to estimate
    > which computations are more or less costly for our distant descendants.
    > My intuition is that it shouldn't be that hard to verify what data
    > structures are used for choosing ordinary actions, and it should be much
    > harder to verify that the process of choosing those beliefs is
    > unbiased. But my computation intuitions are not as good as they once
    > were (having been away from it for a while); I'm interested to hear what
    > other folks' intuitions say.

    If you're a human who has somehow been granted access to another human's
    mind by direct empathy or telepathy (how?), then it might be easier to
    "feel" a deliberate deception than a rationalization. Perhaps this is
    what happens if you do a broadband computer-mediated telepathy connection
    by cross-wiring ten million neurons of prefrontal sensory integration
    cortex and seeing whether the two brains learn to talk to each other;
    perhaps this is what happens if you have a perfect lie detector built by
    training a neural network on fMRI data; perhaps this is what happens if
    you decode the auditory cortex and get a printout of the verbal stream of
    consciousness.
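
    For concreteness, here is a minimal sketch of what the statistical core of
    that hypothetical fMRI lie detector might look like: a classifier trained
    on scans labeled truthful versus deceptive. Everything in it -- the
    library, the data shapes, the synthetic placeholder data -- is an
    illustrative assumption, not a description of any real detector.

        # Hedged sketch only: a linear classifier over fMRI voxel features.
        # The data below is synthetic noise standing in for labeled scans;
        # names, shapes, and the premise itself are assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_scans, n_voxels = 200, 5000                   # placeholder dataset size
        X = rng.standard_normal((n_scans, n_voxels))    # one row per scan
        y = rng.integers(0, 2, size=n_scans)            # 0 = truthful, 1 = deceptive

        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, X, y, cv=5)       # chance-level on random data
        print("cross-validated accuracy:", scores.mean())

    On random stand-in data this hovers at chance, of course; the speculative
    claim is only about what such a pipeline would do given real labeled scans.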

    But if you are a Friendly AI looking at an entire human mind, deliberate
    deception and rationalization should be about equally easy to detect, and
    both should be blatant in human minds or near-term derivatives thereof.
    Yes, internally some mechanisms of human rationalization flash past at
    perceptual speeds and are structurally subtle and hard to see... but from
    the perspective of an experienced connoisseur of minds-in-general looking
    at the source code, I expect human rationalization to be about as subtle
    as a lead pipe to the back of the head. Why would you expect otherwise?
    Evolution is not trying to hide self-deception from FAIs.

    I can see a loving couple wanting to execute a contract allowing for
    two-way FAI-assisted empathy, and yes, that would catch lies more easily
    than it would catch *past* rationalization. Immediate, present-day
    rationalization has a distinct subjective feel to it, though it might be
    subtler for humans to learn to detect than outright lies, and some forms
    of rationalization are so subtle that mere empathy would not be enough to
    catch them. But what is the dreadful consequence of this? Shallow people
    won't fall in love with empathic rationalists? It probably would have
    been a doomed match anyway.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

