RE: AI

From: Emlyn O'regan (oregan.emlyn@healthsolve.com.au)
Date: Wed Apr 16 2003 - 23:45:13 MDT


    > --- Keith Elis <hagbard@ix.netcom.com> wrote:
    > > Has a software architecture been
    > > described in enough
    > > detail to settle that a program can successfully
    > > modify itself?
    >
    > Actually, I've written some. Rather crude, and only
    > allowed to run for a few generations before the
    > solution was good enough to use, but it was
    > self-modifying code.
    >
    > > How in
    > > the world does it work?
    >
    > The approach I used was some basic genetic algorithms,
    > examples of which you should be able to find by googling
    > for that, with a fixed judgement package (in my case,
    > comparing to known correct answers over a certain number
    > of test cases). In theory, if one could write a judge to
    > test for "intelligence" (which requires defining such),
    > allowing the judge to form its own questions consistent
    > with some rules so the algorithms are more likely to
    > adapt for the rules than just for the judge, one could
    > run an approach like that over many many generations.
    > The difficulties there lie in writing a good judge (you
    > can't do manual judging, since you need short generation
    > times), and the large number of generations - and thus
    > runtime - needed (even with 1 second generation times,
    > if it takes a billion generations...). Neither one is a
    > problem to be taken lightly.
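
    The scheme quoted above - a population of candidates scored by a fixed
    judge against known correct answers, run until the solution is good
    enough to use - can be sketched roughly as below. This is a toy
    illustration on a trivial problem, not the poster's actual code; the
    target, the operators, and all the parameters are invented for the
    example:

```python
import random

# The "fixed judgement package": known correct answers to compare against.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def judge(candidate):
    """Score a candidate against the known correct answers (0 is perfect)."""
    return -sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    """Nudge some genes up or down by one."""
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in candidate]

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    random.seed(0)  # deterministic, for the example only
    pop = [[random.randint(0, 9) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=judge, reverse=True)  # best candidates first
        if judge(pop[0]) == 0:
            return pop[0], gen  # good enough to use: stop early
        parents = pop[:pop_size // 2]  # elitist selection: keep the top half
        pop = parents + [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
    return pop[0], generations

best, gen = evolve()
```

    The fixed answer key is exactly the limitation under discussion: this
    works only when you can already write down what "correct" looks like.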

    Intuitively, I think you need general intelligence to be able to judge
    general intelligence when you see it (even then, it's probably pretty hit &
    miss).

    I think you'd be far more likely to get results by developing an environment
    that requires increasing levels of intelligence to survive, and putting the
    instances into that; survival (or maybe some form of reproduction) is then
    the basis of the fitness function.
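
    The environment-driven alternative can be sketched the same way:
    instead of a fixed answer key, fitness is simply how long an agent
    survives in the world. Everything below - the one-dimensional world,
    the two-weight "policy", the energy rules - is an invented toy, not a
    proposal for a real environment:

```python
import random

def run_agent(weights, env_seed, steps=30):
    """Toy world: an agent on a number line must reach food before its
    energy runs out; eating restores energy and relocates the food."""
    env = random.Random(env_seed)
    pos, food, energy = 0, env.randint(-10, 10), 20
    survived = 0
    for _ in range(steps):
        # The agent's entire "policy": step left or right based on two weights.
        move = 1 if weights[0] * (food - pos) + weights[1] > 0 else -1
        pos += move
        energy -= 1
        if pos == food:
            energy += 10
            food = env.randint(-10, 10)
        if energy <= 0:
            break  # starved: survival time is the fitness, no judge needed
        survived += 1
    return survived

def evolve(pop_size=30, generations=40):
    rng = random.Random(1)  # deterministic, for the example only
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness = total survival time across a few environment instances.
        pop.sort(key=lambda w: -sum(run_agent(w, s) for s in range(3)))
        parents = pop[:pop_size // 2]  # the survivors get to reproduce
        pop = parents + [
            [p + rng.gauss(0, 0.1) for p in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return pop[0]

best = evolve()
```

    The point of the sketch is only that the environment, not a
    hand-written judge, supplies the selection pressure; making the
    environment demand anything like real intelligence is the hard part.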

    I'd say Eli would see this as a very dangerous approach to AI, but it might
    just get you through the early stages. I think you'd be unlikely to get
    general intelligence popping up in your experiments without a lot of prior
    warning; it seems unlikely that it'd be that easy.

    So, you might very profitably use a pure evolutionary approach until you get
    something that seems to be able to function at something like the level of a
    human toddler. After that, the path is unclear; you have a bunch of evolved
    (i.e. impenetrable) code which you daren't evolve much further. Ideas?

    Emlyn



    This archive was generated by hypermail 2.1.5 : Wed Apr 16 2003 - 23:55:56 MDT