Re: AI

From: Charles Hixson (charleshixsn@earthlink.net)
Date: Thu Apr 17 2003 - 09:54:22 MDT

    Eliezer S. Yudkowsky wrote:

    > Emlyn O'regan wrote:
    >
    >>
    >> I think you'd be far more likely to get results by developing an
    >> environment
    >> that requires increasing levels of intelligence to survive, and
    >> putting the
    >> instances into that; survival (or maybe some form of reproduction) is
    >> then
    >> the basis of the fitness function.
    >>
    >> I'd say Eli would see this as a very dangerous approach to AI, but it
    >> might
    >> just get you through the early stages. I think you'd be unlikely to get
    >> general intelligence popping up in your experiments without a lot of
    >> prior
    >> warning; it seems unlikely that it'd be that easy.
    >
    >
    > You are correct that I see this as a very dangerous approach to AI.
    >
    > Supposing that the experiment doesn't just fizzle, and you arrive at a
    > baby-level intelligence rather than an unstoppable horror erupting
    > from your computer, what are you going to do with the baby? You don't
    > know how to make it Friendly. If you had that kind of theoretical
    > understanding you wouldn't be poking around.
    >
    > There is no "unlikely" here. There is only an unnecessary existential
    > risk.
    >
    > Just don't go there. If you don't know what you're doing, don't mess
    > around until you do. Don't try to guess whether the risk is large or
    > small; if you have to guess, that means you don't know enough to
    > guess. What you don't know can and will kill you. This is not a
    > matter of the precautionary principle. This is me, a specific person,
    > standing here and telling you: "You see this thing right here that
    > you don't understand? That's going to kill you." Perhaps you think I
    > am wrong. Perhaps I am wrong. Please do not go ahead until you
    > understand *that thing* well enough to say *exactly* why it won't kill
    > you.
    >
    > I repeat: Do not mess around. This is not a game.
    >
    The question might be "what is the baby trying to optimize?"  Genetic
    programs don't just work at random; they also have a selection
    mechanism, and that selection mechanism is what establishes all of
    the "instincts" of the resultant program.  I can imagine an approach
    like this that could lead to a Friendly AI, but I don't think I could
    design it.
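
    To make that concrete, here's a toy sketch in Python (everything in
    it is invented purely for illustration): the scoring function is the
    only channel through which "instincts" can enter the evolved program.

        import random

        # Candidates are step sizes for a random walk; "fitness" is how
        # long the walk stays inside a safe zone.  The only instinct that
        # can evolve here is "stay in the zone", because that is all the
        # selection mechanism ever measures.
        def survival_time(step_size, limit=10.0, max_steps=1000):
            pos = 0.0
            for t in range(max_steps):
                pos += random.uniform(-step_size, step_size)
                if abs(pos) > limit:   # wandered out: "death" at step t
                    return t
            return max_steps

        # Selection: cautious walkers survive longest and win.
        candidates = [random.uniform(0.1, 5.0) for _ in range(20)]
        best = max(candidates, key=survival_time)

    Nothing in survival_time mentions anyone else, so nothing about
    anyone else can be selected for.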

    OTOH, some similar mechanism will probably be needed to allow the AI
    to adapt to unforeseen situations.  And unless you are very clever
    indeed, your AI won't have any knowledge of what it means to be
    someone else as an intrinsic part of its thought processes.  So the
    selection mechanism wouldn't be able to choose based on a
    consideration of whether or not it was harming other sentients.  (It
    probably wouldn't even know what a sentient was.)

    So what you need to do is have the genetic algorithm act as a sort of
    "brain-storming" module that throws up all sorts of approaches, which
    are judged by other parts of the mind (and generally discarded).
    Over time it would evolve a set of "useful algorithms"; when faced
    with a novel problem, the mind would consider the various approaches
    suggested by the genetic black box, evaluate the projected results of
    applying each suggestion, and then decide which way to go.  Here the
    "genetic black box" is basically a generator of possibly useful
    approaches.  Each suggestion checked would be evaluated against a
    problem and scored (the value of the approach incremented or
    decremented).  The least valuable approaches would be discarded, a
    couple of new approaches would be generated, and selection would be
    made at random, weighted by value.  (You don't get distinct
    generations with this approach, but over time you get an equivalent
    effect.)
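
    A minimal steady-state sketch of that loop in Python (all of the
    names here are mine, invented for illustration, not a real design):

        import random

        class BrainstormModule:
            """A steady-state pool of candidate approaches: no distinct
            generations, just continuous scoring, culling, refilling."""

            def __init__(self, generate, mutate, pool_size=50):
                self.generate = generate  # () -> a fresh random approach
                self.mutate = mutate      # approach -> a varied copy
                # Each entry is [approach, value]; value starts neutral.
                self.pool = [[generate(), 0.0] for _ in range(pool_size)]

            def suggest(self):
                """Pick an entry at random, weighted by its value."""
                weights = [max(v, 0.0) + 1.0 for _, v in self.pool]
                return random.choices(self.pool, weights=weights, k=1)[0]

            def score(self, entry, delta):
                """The judging parts of the mind increment or decrement
                an approach's value after projecting its results."""
                entry[1] += delta

            def turnover(self, n=2):
                """Discard the n least valuable approaches and generate
                n new ones, some fresh and some varied from survivors."""
                self.pool.sort(key=lambda e: e[1], reverse=True)
                del self.pool[-n:]
                for i in range(n):
                    seed = (self.generate() if i % 2 else
                            self.mutate(random.choice(self.pool)[0]))
                    self.pool.append([seed, 0.0])

    Because culling and refilling happen continuously, the pool drifts
    toward whatever the judging modules keep accepting; that is the
    "equivalent effect" of generations without discrete cohorts.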

    This could well be a very useful *component* of the seed.

    Another important question is what "nucleotides" the genetic program
    is combining.  I would suggest that they include most of Knuth's
    fundamental algorithms, though designing an API that would allow the
    genetic program to combine them could be a challenge.  Still, you
    have containers with access and deletion methods.  You've got
    arithmetic.  You've got set operations (expanded to work on all of
    the containers)... there's a lot to work with.  But I think that
    until the program starts tinkering in an understanding way with its
    own internals, the fundamental algorithms should probably be
    considered immutable.  These are the components you build with, not
    the pieces you are designing.  Program instructions need to be
    available as sequencing operations, etc., but they are at too low a
    level for even the genetic component to design.  (Besides, we've
    generally got near-optimal algorithms in these areas, so extensive
    improvement is probably impossible... better to put the creative
    energy into places that aren't optimized.)
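
    One way such an API might look, as a rough sketch (every name below
    is invented for illustration): give each primitive a uniform
    list-in/list-out signature, so the genetic component can splice them
    freely while the bodies stay fixed.

        # Immutable primitive set: the genetic program recombines these
        # building blocks but can never rewrite their internals.
        PRIMITIVES = {
            "push":    lambda xs, v: xs + [v],        # container insert
            "drop":    lambda xs: xs[:-1],            # container delete
            "sort":    lambda xs: sorted(xs),         # a Knuth staple
            "reverse": lambda xs: list(reversed(xs)),
            "union":   lambda xs, ys: list(dict.fromkeys(xs + ys)),
        }

        def run(genome, data):
            """A genome is a list of (primitive_name, extra_args) pairs;
            evolution rearranges the pairs, never the primitives."""
            for name, args in genome:
                data = PRIMITIVES[name](data, *args)
            return data

        # A genome the genetic component might have assembled:
        genome = [("push", (7,)), ("sort", ()), ("reverse", ())]
        print(run(genome, [3, 1, 4]))    # -> [7, 4, 3, 1]

    The uniform signature is the whole trick: recombination only ever
    rearranges which primitives fire and in what order.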


