Matt Gingell wondered:
> Sure - I'm completely sympathetic to this point of view. Would you
> agree though that we shouldn't aim to hand code a machine that does
> anything a baby can't?
No, I disagree. As a general point, we're trying to code a transhuman
AI. This thing SHOULD be more powerful than we are. This, to a great
extent, is The Whole Point.
But glib answers aside, Eliezer lists a number of things babies can't
do that we'd clearly want a Seed AI to be able to do: develop new
sensory modalities, blend conscious and autonomic thought, overpower
certain technical problems, observe its own thought processes at a
relevant level of detail, and improve upon those processes at a
fundamental level. He also gives a good argument as to why none of
these features are emergent.
> There is a wider, more important, question here though: What is a baby
> born with? What have millions of years of evolution invented? Do we
> have, as I would like to think, a superbly elegant, distributed
> learning and representation forming machine, a general purpose pattern
> extraction engine; or do we have a bunch of rules and hardwired
> concepts with an afterthought of a theorem prover on top? Has our
> Darwinian history provided us with a database of rules and
> combinatorics, or has it stumbled across a universal blank slate
> automaton - itself more fit than any single, static apparatus?
Unfortunately, it's just a bunch of rules with logic as an
afterthought. It would be handy if there were a general problem
solver and we had somehow hit upon it. But I see no reason to think
that this is the case. There are a lot of problems that we're quite
obviously bad at, for clear evolutionary reasons. We might even be
able to see ways to fix some of our most glaring failings, if only we
had the capacity to modify our autonomic thought processes in an
intentional and relevant way. We can't.
BTW, the fact that no such Holy Grail exists also provides a plausible
explanation as to why AI has failed so often in the past. In an
important sense, were you right about what intelligence is like, AI
would be easier than it has turned out to be.
> Surely it's a bit of both - one can render the nature vs. nurture
> dialectic, or any other, bland by saying it's a mix. The question
> I'm posing though is as follows: Is intelligence something special,
> is it something beyond a vast store of facts and rules? Is
> instinctive knowledge necessary, or does instinct simply optimize
> something deeper and more interesting? Can we construct a general
> definition of what intelligence is, independent of its utility in
> some specific environment - and if we can is it possible to develop
> an instance of that definition which would function 'intelligently'
> no matter what universe we drop it into? My answer is, obviously,
> yes, and finding that abstraction is the proper goal of AI and
> cognitive science research. Think, for instance, about the concept
> 'natural number.' What does it take to extract that from the world?
> Surely it's universal to anything we'd recognize as intelligent -
> but where does it come from, and by what process?
I can see that it would disappoint you if the answer turned out to be
that this feature emerged in our brains only due to some contingent
evolutionary process. Allow me to disappoint you a little further by
suggesting that you read Eliezer's "Algernon's Law," which argues that
intelligence, of the kind in which we're interested, is an
evolutionary *disadvantage*. This gives us every reason to believe
that not only are we NOT the general problem solver you might wish we
were, but that, because we tend to behave in evolutionarily
advantageous ways, we're barely on the right track.
> Moravec, if I'm remembering correctly, estimates the raw computing
> resources of the human brain at something like 10 teraflops. Give me
> 'a few months' of time on a machine that big, and I'm confident I
> could extract a reasonable working theory of objects as permanent,
> law-obeying things from raw sense data. You would argue, presumably, that
> these months are spent on physical/neurological maturation and the
> expression of world-describing genes - whereas I would like to
> believe they are spent on what I will call, for lack of a more
> descriptive word, 'learning.'
Actually, I believe that you couldn't do it at all.
Look, suppose you WERE somehow able to map out a somewhat
comprehensive list of possible conceptual schemes which you would use
to categorize the "raw sense data." How could you algorithmically
determine which of these conceptual schemes worked better than some
others? Any others? Our ancestors had a way: use it as a rule for
action, see if it helps you breed. You and your machines, even at
10^21 ops/sec, would have nothing to test your values against.
Consider a search space in which you're trying to find local maxima.
Now imagine trying to do it without any idea of the height of any
point in the space. Now try throwing 10^100 ops at the project.
Doesn't help, does it?
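Here's a minimal sketch of the point, in Python (the state type, the
neighbors() function, and the height() oracle are all hypothetical
stand-ins - the real search space is exactly what's in dispute):

    import random

    def hill_climb(start, neighbors, height, steps=10000):
        # Ordinary local search: move to a higher neighbor while
        # one exists.
        current = start
        for _ in range(steps):
            best = max(neighbors(current), key=height)
            if height(best) <= height(current):
                return current  # a local maximum
            current = best
        return current

    def blind_walk(start, neighbors, steps=10000):
        # The same loop with the height function deleted: every
        # neighbor looks equally good, so this is just a random
        # walk.  Multiplying steps by 10^100 buys you a longer
        # random walk, nothing more.
        current = start
        for _ in range(steps):
            current = random.choice(neighbors(current))
        return current

Evolution supplied the height function - differential reproduction -
and without some substitute for it, no amount of raw speed tells you
which conceptual scheme is better than which.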
> Babies are a useful thing to ask questions about - they're the only
> example of what a raw mind looks like - but we should keep in mind
> that the human mind is not the only possible solution to the problem
> of general intelligence. A brain is a useful thing to think about, but
> an artificial mind might bear no resemblance to any natural
> neurology. The design process and the engineering constraints differ
> profoundly. Take the example of flight: the Concorde, an artificial
> bird, has no feathers, nor does it flap its wings for lift.
I see no reason to think that there is a "raw mind." There are some
minds, such as they are, but there is nothing out there to purify.
(Eliezer and others call this mythical purificant "mindstuff.")
To the extent that I can make this analogy in a totally non-moral way
(I'll try), this is the difference between fascist eugenics and
transhuman eugenics. Fascist eugenics tries to breed out impurities,
to bring us back to the one pure thing at our center; transhuman
eugenics works to create something completely different, in nobody's
image in particular.
[Again, I don't use this to imply anything morally about you or anyone
who agrees with you, but merely to draw the distinction.]
-Dan
-unless you love someone-
-nothing else makes any sense-
e.e. cummings