Re: AI

From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Thu Apr 17 2003 - 04:31:00 MDT

    Emlyn O'regan wrote:
    >
    > I think you'd be far more likely to get results by developing an environment
    > that requires increasing levels of intelligence to survive, and putting the
    > instances into that; survival (or maybe some form of reproduction) is then
    > the basis of the fitness function.
    >
    > I'd say Eli would see this as a very dangerous approach to AI, but it might
    > just get you through the early stages. I think you'd be unlikely to get
    > general intelligence popping up in your experiments without a lot of prior
    > warning; it seems unlikely that it'd be that easy.
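
    Concretely, what you are describing is an ordinary evolutionary loop in
    which fitness is nothing more than how long an agent survives in an
    environment that gets harder each generation. A minimal, purely
    illustrative sketch, with every name and number in it invented for the
    example:

        import random

        GENOME_LEN, POP_SIZE, GENERATIONS = 16, 50, 30

        def random_genome():
            return [random.random() for _ in range(GENOME_LEN)]

        def survival_time(genome, difficulty):
            # Stand-in "environment": the agent survives in proportion to how
            # far its total capability exceeds the current difficulty level.
            return max(0.0, sum(genome) - difficulty)

        def mutate(genome, rate=0.1):
            return [g + random.gauss(0, rate) for g in genome]

        population = [random_genome() for _ in range(POP_SIZE)]
        for gen in range(GENERATIONS):
            difficulty = 0.5 * gen   # environment gets harder each generation
            ranked = sorted(population,
                            key=lambda g: survival_time(g, difficulty),
                            reverse=True)
            survivors = ranked[:POP_SIZE // 2]      # only the fittest survive
            # survivors reproduce, with mutation, to refill the population
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(POP_SIZE - len(survivors))]

    The sketch only pins down what "survival as the basis of the fitness
    function" means in practice; nothing about it makes the selected behavior
    safe or predictable as capability grows.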

    You are correct that I see this as a very dangerous approach to AI.

    Supposing that the experiment doesn't just fizzle, and you arrive at a
    baby-level intelligence rather than an unstoppable horror erupting from
    your computer, what are you going to do with the baby? You don't know how
    to make it Friendly. If you had that kind of theoretical understanding,
    you wouldn't be poking around.

    There is no "unlikely" here. There is only an unnecessary existential risk.

    Just don't go there. If you don't know what you're doing, don't mess
    around until you do. Don't try to guess whether the risk is large or
    small; if you have to guess, that means you don't know enough to guess.
    What you don't know can and will kill you. This is not a matter of the
    precautionary principle. This is me, a specific person, standing here and
    telling you: "You see this thing right here that you don't understand?
    That's going to kill you." Perhaps you think I am wrong. Perhaps I am
    wrong. Please do not go ahead until you understand *that thing* well
    enough to say *exactly* why it won't kill you.

    I repeat: Do not mess around. This is not a game.

    -- 
    Eliezer S. Yudkowsky                          http://singinst.org/
    Research Fellow, Singularity Institute for Artificial Intelligence
    

