Re: AI

From: brent.allsop@attbi.com
Date: Thu Apr 17 2003 - 14:46:59 MDT

    Eliezer,

    >>> an unstoppable horror erupting from your computer,

    and

    >>> I repeat: Do not mess around. This is not a game.

    It’s hard for me to imagine why an unfriendly AI is something we should fear
    so much. How could an unstoppable horror erupt from one’s computer?

    There seems to me to be a correlation between how intelligent someone is and
    how friendly they are. The unfriendly people who end up in prison and so on
    are usually quite unintelligent. To me this seems like common sense.

    Also, aren’t there survivability reasons for being friendly/cooperative with
    other beings? Surely any AI would be able to figure out that it could grow
    much faster by cooperating with humans and encouraging them to help it out,
    rather than taking actions that would cause them to fight it every step of
    the way?

    Admittedly these aren’t very strong arguments, but then I have yet to see any
    stronger argument for why we should fear an unfriendly AI erupting from our
    computers.

    Brent Allsop

    > Emlyn O'regan wrote:
    > >
    > > I think you'd be far more likely to get results by developing an environment
    > > that requires increasing levels of intelligence to survive, and putting the
    > > instances into that; survival (or maybe some form of reproduction) is then
    > > the basis of the fitness function.
    > >
    > > I'd say Eli would see this as a very dangerous approach to AI, but it might
    > > just get you through the early stages. I think you'd be unlikely to get
    > > general intelligence popping up in your experiments without a lot of prior
    > > warning; it seems unlikely that it'd be that easy.
    >
    > You are correct that I see this as a very dangerous approach to AI.
    >
    > Supposing that the experiment doesn't just fizzle, and you arrive at a
    > baby-level intelligence rather than an unstoppable horror erupting from
    > your computer, what are you going to do with the baby? You don't know how
    > to make it Friendly. If you had that kind of theoretical understanding
    > you wouldn't be poking around.
    >
    > There is no "unlikely" here. There is only an unnecessary existential risk.
    >
    > Just don't go there. If you don't know what you're doing, don't mess
    > around until you do. Don't try to guess whether the risk is large or
    > small; if you have to guess, that means you don't know enough to guess.
    > What you don't know can and will kill you. This is not a matter of the
    > precautionary principle. This is me, a specific person, standing here and
    > telling you: "You see this thing right here that you don't understand?
    > That's going to kill you." Perhaps you think I am wrong. Perhaps I am
    > wrong. Please do not go ahead until you understand *that thing* well
    > enough to say *exactly* why it won't kill you.
    >
    > I repeat: Do not mess around. This is not a game.
    >
    > --
    > Eliezer S. Yudkowsky http://singinst.org/
    > Research Fellow, Singularity Institute for Artificial Intelligence
    >
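
    As a concrete aside, the "survival as fitness function" scheme Emlyn
    describes above can be sketched in a few lines. This is purely illustrative:
    the single-number "skill" genome, the mutation step, and the ratcheting
    difficulty below are my own assumptions, not anything specified in the
    thread.

        # Toy sketch of an environment whose difficulty ratchets upward,
        # with survival itself serving as the fitness function.
        import random

        POP_SIZE = 100
        GENERATIONS = 50

        def make_agent():
            return random.gauss(0.0, 1.0)   # genome: a single "skill" number

        def mutate(skill):
            return skill + random.gauss(0.0, 0.1)

        population = [make_agent() for _ in range(POP_SIZE)]
        difficulty = 0.0

        for gen in range(GENERATIONS):
            # Only agents whose skill clears the current difficulty survive
            # to reproduce; there is no other scoring.
            survivors = [a for a in population if a > difficulty]
            if not survivors:
                # The environment outpaced the population entirely.
                print(f"generation {gen}: extinct at difficulty {difficulty:.2f}")
                break
            # Survivors reproduce with mutation to refill the population.
            population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]
            difficulty += 0.05  # the environment demands ever more "skill"
        else:
            print(f"final mean skill: {sum(population) / len(population):.2f}")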


