Re: SITE: Coding a Transhuman AI 2.0a

From: Matt Gingell (mjg223@is7.nyu.edu)
Date: Sat May 20 2000 - 15:25:48 MDT


Dan Fabulich wrote:

> Matt Gingell wondered:
> ...

> BTW, the fact that no such Holy Grail exists also provides a plausible
> explanation as to why AI has failed so often in the past. In an
> important sense, were you right about what intelligence is like, AI
> would be easier than it is, not harder.

How many attempts at heavier-than-air flight failed before we figured
that out? Just because we haven't found an answer yet, or haven't got
a fast enough machine to try it (a light enough engine), doesn't mean
one isn't out there. We've only had computers for 50 years, and
already we've done things that even a hundred years ago would have
been generally thought impossible. Give AI a break. Obviously there
are huge holes left to be filled and principles yet to be discovered,
but it's a very young science.

> ...

> I can see that it would disappoint you if the answer turned out to be
> that this feature emerged in our brains only due to some contingent
> evolutionary process.

Yes, it'd disappoint me. I don't want to think of the mind as nothing
but an ad hoc, idiosyncratic pile of junk spaghetti code. Of course,
I recognize how easy it is to believe things because you'd like them
to be true, and acknowledge that you may well turn out to be right. I
don't think, though, that there's anything wrong with a scientific
enquiry being guided by a personal sense of elegance and aesthetics.

> ...

> Look, suppose you WERE somehow able to map out a somewhat
> comprehensive list of possible conceptual schemes which you would use
> to categorize the "raw sense data." How could you algorithmically
> determine which of these conceptual schemes worked better than some
> others? Any others? Our ancestors had a way: use it as a rule for
> action, see if it helps you breed. You and your machines, even at
> 10^21 ops/sec, would have nothing to test your values against.

When Newton developed his theory of gravitation, did he iterate
through the space of all possible physical laws till he found one that
matched his data? You still seem to be convinced that learning, the
discovery of patterns in facts, is a blind search.

A conceptual scheme is a theory about the world, a model of the way
things in the world work and the sort of laws they obey. Some ways of
looking at the world are objectively better than others, regardless of
their utility as a tool for perpetuating your own genes. Intelligence
is that which extracts those models from raw data. Feedback from the
world is a useful tool for finding them, but it isn't the only one.

This is a vision of learning as scientific discovery. There are an
infinite number of theories which could account for, say, astronomical
observations of the visible planets in the solar system, and yet we
somehow distinguish good ones from bad ones. Epicycles predict the
apparent motion of the planets with as much precision as Kepler's
laws, yet we all agree that the second theory is, in some way that is
difficult to make rigorous, more elegant and more 'correct' than
assuming the Earth stands still.

Here's a simple example of the sort of thing I'm talking about:

Suppose there exists some Vast set of character strings, and I've
posed you the challenge of characterizing that set from some finite
number of examples. The sample you've got contains instances like: (ab),
(aab), (abb), (aaabbb), (abbb), etc.

The answer is obvious, or at least it would be with enough samples:
this is the set of strings consisting of one or more instances of 'a'
followed by one or more instances of 'b.' Of course, any other answer
is defensible: you could say we're looking at the set of all strings
built of characters in the alphabet, and we just got a misleading
sample. Or you could say this set contains only the examples you've
seen. Both those answers are wrong though, in the same way epicycles
are wrong. It's my contention that there exists some general algorithm
for determining those 'right' answers.
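
For concreteness, here's a toy sketch in Python of how such an
algorithm might score the three hypotheses by description length:
the cost of a theory is the bits needed to state it plus the bits
needed to encode each sample under it, and the lowest total wins.
The coding scheme, and the small constants standing in for the cost
of stating each hypothesis, are made up for illustration; this is a
sketch of the idea, not a rigorous code.

    # Toy minimum-description-length comparison for the (ab), (aab), ...
    # example. Cost of a hypothesis = bits to state it + bits to encode
    # each sample under it; the lowest total wins.

    import math

    samples = ["ab", "aab", "abb", "aaabbb", "abbb"]

    def bits(n):
        # crude code length for a positive integer
        return math.log2(n) + 1

    # "any string over {a, b}": cheap to state, but every character of
    # every sample must then be spelled out (1 bit per character).
    h_any = 2 + sum(bits(len(s)) + len(s) for s in samples)

    # "exactly the strings seen": the hypothesis must list all the data;
    # each sample then costs only an index into that list.
    h_memorize = (sum(bits(len(s)) + len(s) for s in samples)
                  + len(samples) * math.log2(len(samples)))

    # "a+b+": a short grammar; each sample reduces to two repeat counts.
    h_grammar = 4 + sum(bits(s.count("a")) + bits(s.count("b"))
                        for s in samples)

    for name, cost in [("any string", h_any), ("memorize", h_memorize),
                       ("a+b+", h_grammar)]:
        print(f"{name:12s} {cost:6.1f} bits")

Run on the five samples above, the a+b+ grammar comes out cheapest
and memorizing the examples comes out most expensive, which is the
sense in which the grammar is the 'right' answer.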

> Consider a search space in which you're trying to find local maximums.
> Now imagine trying to do it without any idea of the height of any
> point in the space. Now try throwing 10^100 ops at the project.
> Doesn't help, does it?

You do have a criterion: the representation of a theory should be as
small as possible, and it should generalize as little as possible while
describing as many examples as possible. It's Occam's Razor. I'll read
up on seed AI if you agree to read up on unsupervised learning
(learning without feedback or tagged examples).
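
To make 'learning without feedback or tagged examples' concrete,
here's a minimal sketch of one of the simplest unsupervised methods,
k-means clustering, in Python. Nothing in it is labeled and no reward
signal arrives from outside; the algorithm recovers the structure
from the raw points alone. The data and parameters are invented for
illustration.

    # Minimal unsupervised learning: k-means clustering in pure Python.
    # No labels, no feedback signal; structure is found in the raw
    # points alone.

    import random

    def kmeans(points, k, iters=50, seed=0):
        random.seed(seed)
        centers = random.sample(points, k)
        for _ in range(iters):
            # assign each point to its nearest center
            clusters = [[] for _ in range(k)]
            for p in points:
                i = min(range(k),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(p, centers[j])))
                clusters[i].append(p)
            # move each center to the mean of its cluster
            for j, c in enumerate(clusters):
                if c:
                    centers[j] = tuple(sum(xs) / len(c) for xs in zip(*c))
        return centers

    # two unlabeled blobs; the algorithm recovers their centers unaided
    pts = [(0.1, 0.2), (0.0, -0.1), (0.2, 0.0),
           (5.1, 4.9), (4.8, 5.2), (5.0, 5.0)]
    print(kmeans(pts, k=2))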

> I see no reason to think that there is a "raw mind." There are some
> minds, such as they are, but there is nothing out there to purify.
> (Eliezer and others call this mythical purificant "mindstuff.")

A heart is a pump, an eye is a camera; why can't a brain be a baroque
biological instance of something simpler and more abstract?

> To the extent that I can make this analogy in a totally non-moral way
> (I'll try), this is the difference between fascist eugenics and
> transhuman eugenics. Fascist eugenics tries to breed out impurities,
> to bring us back to the one pure thing at our center; transhuman
> eugenics works to create something completely different, in nobody's
> image in particular.
>
> [Again, I don't use this to imply anything morally about you or anyone
> who agrees with you, but merely to draw the distinction.]

Thanks for qualifying that, but it's still a hell of a loaded
analogy. I prefer to think of blank-slate intelligence as an
egalitarian notion: we are all the same, differing only in our
experience and hardware resources, be we human, alien, or
machine. The politics is irrelevant to the question, of course, but
I'd still rather not be called a Nazi.

-matt
