Re: GAC

From: Christopher McKinstry (cmckinst@eso.org)
Date: Wed Jul 04 2001 - 03:05:21 MDT


>Eliezer S. Yudkowsky wrote:

> Anyway, if GAC is a black box, it says to me that you think a simple
> algorithm lies at the core; if you're playing with SOMs and SRNs, it says
> to me that the internal functionality complexity of GAC is almost nil.

It says nothing of the sort. What I intended to communicate is that you
could not possibly know how GAC works, because I haven't disclosed it.
You were speaking as if you knew exactly what was going on inside my
system, which is impossible: you don't have access to the information
needed to make the statements you made.

And what on earth does the fact that I'm using SOMs and SRNs have to do
with 'internal functionality complexity', or any type of complexity for
that matter? One has nothing to do with the other. I can use a SOM or an
SRN on data of ANY complexity.
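
To make the point concrete, here's a toy SOM in Python (a generic
sketch, nothing like GAC's actual code): the update rule is identical
whatever the input vectors happen to encode.

import numpy as np

def train_som(data, grid=10, epochs=20, lr=0.5, sigma=3.0):
    # Fit a grid x grid self-organizing map to data (n_samples x n_features).
    # The algorithm never looks at what the vectors *mean*; the complexity
    # of the data is irrelevant to the procedure.
    n_features = data.shape[1]
    weights = np.random.rand(grid, grid, n_features)
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing='ij'), axis=-1)
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Pull the BMU and its grid neighbours toward the input.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-grid_dist**2 / (2 * sigma**2))[..., None]
            weights += lr * h * (x - weights)
    return weights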

 
> > 2 - The primary purpose of GAC is to build a fitness test for humanness in a
> > binary response domain. This will in the future allow GAC to babysit a truly
> > evolving artificial consciousness, rewarding and punishing it as needed at
> > machine speeds.
>
> That certainly isn't what it says on your Website. On your Website, it
> says things along the lines of: GAC! The new revolution in AI! The
> first step towards true artificial consciousness! We're teaching it what
> it means to be human!

Which is true. But read it again and see what you missed. Also read any
public interview I've given (this one, for example:
http://slashdot.org/articles/00/07/04/2114223.shtml). It's always been
about building a fitness test. I hate to quote myself, but it looks like
I have to. From http://www.mindpixel.com/About/about.php3:

"Eventually, it is hoped a GAC trained neural network will become
indistinguishable from any human being when presented with any yes/no
question/statement independent of whether or not GAC has seen that
particular question/statement before. GAC's database will also be used
to develop the first true images of the entire human conceptual network;
the first true images of the human mind..."

Or more clearly, from http://www.mindpixel.com/Cyc_vs_GAC/cyc_vs_gac.php3:

"Remember, GAC is just a database; just a high-res copy of our reality.
But with enough mindpixels, we can make all the connections the Cyc team
is trying to make manually, automatically."

> > 4 - GAC is science. Over 8 million actual measurements of human consensus
> > have been made. There are at least two other projects that claim to be
> > collecting human consensus information - CYC and Open Mind - neither has
> > actually done the science to verify that what is in their databases is
> > actually consensus human fact. It's all hearsay until each item is
> > presented to at least 20 people (central limit theorem.)
>
> 4.1: I'm not fond of Cyc either. But Cyc isn't claiming to collect human
> consensus information; rather, they are claiming to collect the
> commonsense knowledge that one human might be expected to have. I think
> Cyc has nothing but a bunch of suggestively named LISP predicates. If
> they *were* collecting knowledge, however, what would be relevant would
> not be whether the knowledge was true, or whether it was consensual, but
> whether it duplicated the relevant functional complexity of the
> commonsense knowledge possessed by a single human mind.

You seem to be slipping on your basic reading again. From
http://www.cyc.com/overview.html:

"The knowledge base is built upon a core of over 1,000,000 hand-entered
assertions (or "rules") designed to capture a large portion of what we
normally consider consensus knowledge about the world. For example, Cyc
knows that trees are usually outdoors, that once people die they stop
buying things, and that glasses of liquid should be carried
rightside-up."

As well, note that the core rules are binary. Each one cost about $50.00
to make!

 
> 4.2: Performing lots and lots of actual measurements does not make it
> science. To make it science, you need to come up with a central
> hypothesis about AI or human cognition, use the hypothesis to make a
> prediction, and use those lots and lots of measurements to test that
> prediction. Analogously, I would also note that until GAC can use its
> pixels to predict new pixels, it is not "AI" in even the smallest sense;
> it remains a frozen picture, possibly useful as a fitness test for some
> other AI (I disagree), but not intelligent in itself; as unthinking as the
> binary JPEG data of the Mona Lisa.

Not quite. I assume you've heard of observational science? Like the kind
the astronomer over my right shoulder is doing right now?

And a JPEG of the Mona Lisa is not quite as unthinking as you suppose.
If you build a model of her from random samples, those samples contain
within them information about the samples you never took. After all,
it's a holographic universe.
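
If you doubt that, try it: sample a few percent of the pixels at random
and estimate the rest from their nearest sampled neighbours. A quick
sketch (standard scipy, nothing GAC-specific; 'image' stands in for any
2-D greyscale array):

import numpy as np
from scipy.interpolate import griddata

def reconstruct(image, fraction=0.05, seed=0):
    # Keep a random 5% of the pixels, then fill in every unsampled pixel
    # from its nearest sampled neighbour. The result is already
    # recognizable: the samples carry information about pixels never taken.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    n = int(fraction * h * w)
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    return griddata((ys, xs), image[ys, xs], (grid_y, grid_x),
                    method='nearest')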

As for theory... it's hypertomography. GAC is a very large database of
high-dimensional prototype vectors. The same math that can give you 2-D
reconstructions of the brain from MRI scanner data can give you
low-dimensional projections of the very high-dimensional human
conceptual network.
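
The projection step itself is nothing exotic. Here's one way to image
the set (plain PCA via SVD; I'm not disclosing GAC's actual projection,
so treat this purely as a stand-in):

import numpy as np

def project_2d(prototypes):
    # prototypes: n_vectors x n_dims, one row per encoded item.
    centered = prototypes - prototypes.mean(axis=0)
    # The top right-singular vectors are the directions of greatest
    # variance; projecting onto the first two gives 2-D image coordinates.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T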

Comparing an unknown vector to all of these vectors allows for a
statistical prediction of that vector's truth value. The more prototype
vectors, the better the prediction. The catch is, for the math to work
you need millions of prototypes (which is why tomographic scanners need
to take so many samples.)
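
In sketch form (the encoding and the distance metric here are my
placeholders, not GAC's):

import numpy as np

def predict_truth(query, prototypes, truth_values, k=50):
    # prototypes: n x d array; truth_values: n consensus scores in [0, 1].
    # Estimate the unknown vector's truth from its k nearest prototypes.
    dists = np.linalg.norm(prototypes - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # More prototypes means denser coverage of the space and a steadier
    # estimate - exactly why the method needs millions of them.
    return truth_values[nearest].mean()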

Read Elman's SRN paper 'Finding Structure in Time' to see just how well
SRNs extract grammatical structure from prototype vectors. It's rather
shocking - and more than a decade old.
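
If you've never seen one, the whole architecture fits in a few lines.
A toy forward pass (untrained, just to show the context loop):

import numpy as np

def srn_forward(inputs, w_xh, w_hh, w_hy):
    # Elman's trick: the 'context' is simply last step's hidden state,
    # fed back in alongside the current input. That loop is what lets
    # the net absorb sequential (grammatical) structure.
    hidden = np.zeros(w_hh.shape[0])
    outputs = []
    for x in inputs:
        hidden = np.tanh(w_xh @ x + w_hh @ hidden)
        outputs.append(w_hy @ hidden)
    return outputs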

Now, once you have a system that can classify true and false vectors, so
what? Well, then it's just like Deep Blue - all it really does is
classify chess moves, millions and millions per second, until it finds
the best one in the time available. You can do the same thing with a
system that can classify vectors as true or false. Feed it millions and
millions of random vectors with just the statistics of English - most of
the time it won't be able to classify the vectors as strongly true or
strongly false because, well, they're random garbage. But every once in
a while it will find a vector it can classify, and that it can prove it
has never seen before - that vector is an artificial thought.
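
As a loop, reusing predict_truth from the sketch above
(generate_english_like, encode, and known_statements are hypothetical
stand-ins; I'm only illustrating the shape of the search):

def find_artificial_thoughts(n_trials, prototypes, truth_values,
                             threshold=0.95):
    thoughts = []
    for _ in range(n_trials):
        candidate = generate_english_like()   # random but English-shaped
        if candidate in known_statements:     # must be provably novel
            continue
        p = predict_truth(encode(candidate), prototypes, truth_values)
        # Most candidates hover near 0.5 - unclassifiable garbage. The
        # rare confident call on a never-seen statement is the payoff.
        if p > threshold or p < 1 - threshold:
            thoughts.append((candidate, p))
    return thoughts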

Chris.


