Re: SITE: Coding a Transhuman AI 2.0a

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Fri May 19 2000 - 22:57:15 MDT


A gene collective with the tag 'Dan Fabulich' expressed themselves
thusly:

> You're right, they don't spend years figuring out stoves. That's
> because they already have lots and lots and LOTS of information
> about the world around them before they ever get out of the womb.
> This information is stored in the genes and is visible as what we
> call "instinctual" behavior. THIS information took millions and
> millions of years to develop, and without it, our children WOULD be
> spending years (actually, years is an optimistic estimate, IMO)
> figuring out a hot stove.

Sure - I'm completely sympathetic to this point of view. Would you
agree, though, that we shouldn't aim to hand-code a machine that does
anything a baby can't?

There is a wider, more important question here, though: What is a baby
born with? What have millions of years of evolution invented? Do we
have, as I would like to think, a superbly elegant, distributed
learning and representation-forming machine, a general-purpose pattern
extraction engine; or do we have a bunch of rules and hardwired
concepts with a theorem prover tacked on as an afterthought? Has our
Darwinian history provided us with a database of rules and
combinatorics, or has it stumbled across a universal blank-slate
automaton - itself more fit than any single, static apparatus?

Surely it's a bit of both - one can render the nature vs. nurture
dialectic, or any other, bland by saying it's a mix. The question I'm
posing, though, is this: Is intelligence something special, something
beyond a vast store of facts and rules? Is instinctive knowledge
necessary, or does instinct simply optimize something deeper and more
interesting? Can we construct a general definition of what
intelligence is, independent of its utility in some specific
environment - and if we can, is it possible to develop an instance of
that definition which would function 'intelligently' no matter what
universe we drop it into? My answer is, obviously, yes, and finding
that abstraction is the proper goal of AI and cognitive science
research. Think, for instance, about the concept 'natural number.'
What does it take to extract that from the world? Surely it's
universal to anything we'd recognize as intelligent - but where does
it come from, and by what process?

> Experiments with babies show that babies as early as a few months
> are surprised to see objects vanish into thin air, come to a full
> stop without being in contact with another object, and begin moving
> without a force to propel them. Babies have a very complicated idea
> of what elements in their visual field constitute objects and how
> those objects will generally behave.

Moravec, if I'm remembering correctly, estimates the raw computing
resources of the human brain at something like 10 teraflops. Give me
'a few months' of time on a machine that big, and I'm confident I
could extract a reasonable working theory of objects as permanent,
law-obeying things from raw sense data. You would argue, presumably,
that these months are spent on physical/neurological maturation and
the expression of world-describing genes - whereas I would like to
believe they are spent on what I will call, for lack of a more
descriptive word, 'learning.'
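
(For scale, a rough back-of-envelope sketch - taking Moravec's figure
as ~10 teraflops and 'a few months' to mean roughly three; both
numbers are just my own assumptions here:)

  # Rough back-of-envelope: raw operations available in ~3 months
  # on a ~10 teraflop machine (Moravec's ballpark for the brain).
  ops_per_second = 10e12               # assumed: 10 teraflops
  seconds = 3 * 30 * 24 * 60 * 60      # roughly three months
  total_ops = ops_per_second * seconds
  print("%.1e operations" % total_ops) # ~7.8e19 - call it 10^20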

Babies are a useful thing to ask questions about - they're our only
example of what a raw mind looks like - but we should keep in mind
that the human mind is not the only possible solution to the problem
of general intelligence. A brain is a useful thing to think about, but
an artificial mind might bear no resemblance to any natural
neurology. The design process and the engineering constraints differ
profoundly. Take the example of flight: the Concorde, an artificial
bird, has no feathers, nor does it flap its wings for lift.

> This information is not learned from a blank slate at birth. (There
> was no time.) We are born with it. Unless we program it into our
> AI, we should expect the AI to require evolutionary time in order to
> figure these things out. That's time we don't have.

We'll do both: If you want to go work from the top down, more power to
you. I happen to think the bottom is the important bit, but I wish you
nothing but luck. Only time will tell who gets there first.

-matt


