Re: GAC

From: Robert J. Bradbury (bradbury@aeiveos.com)
Date: Mon Jul 02 2001 - 11:54:35 MDT


> I just joined this list after Amara Angelica from KurzweilAI pointed out
> that there was some talk about GAC in this group. I've looked at some of the
> posts and would like to make some comments:

Chris, since you have probably already read Eliezer's comments
(if you haven't, you should follow the links from extropy.org
to the archives of the last week or so), I'll simply comment
that some of us are fairly neutral regarding approaches to
different aspects of AI (and will freely admit our lack of
knowledge with regard to much of it).

So, stating my ignorance in advance, I'll make a few comments
on your points:
 
> 1 - GAC is a black box. I have made no disclosures on what it uses for
> pattern matching [snip]

While I can understand the reasons for doing this, it will leave a
bad taste in the mouths of many with an academic or open-source
perspective.

> 2 - The primary purpose of GAC is to build a fitness test for humanness in a
> binary response domain. This will in the future allow GAC to babysit a truly
> evolving artificial consciousness, rewarding and punishing it as needed at
> machine speeds.

I'd say this statement is flawed from two perspectives. First, if you
have no verification of the "reputation" of your "humanness" sources,
you have no controls on the results. You are going to get very
different results if your humans are Jesuit priests vs. modern-day
neo-nazis. Presumably you get a slightly more intelligent and affluent
cross section of humanity (i.e. the people who are net-connected and
producing the inputs to your system). That would seem to imply you are
going to get a pretty "average" human perspective out of the whole
effort. Producing more "average" humans isn't of much use from an
extropian perspective. We have more than enough problems figuring out
how to feed the ones produced using the regular old-fashioned
manufacturing methods. Second, there seems to be an implicit
assumption that measurements of current "humanness" can be used to
evolve an artificial consciousness. Leaving aside the meaning of the
suitcase term "consciousness", the problem may be that for humans to
have reached that state they had to go through their entire
evolutionary history. Modern humans, not knowing how to chip flint or
stalk a mammoth, may not be able to regenerate "consciousness". All
you may end up with is a machine that can pass the Turing test but
still isn't "conscious".

> Right now, GAC is a 50,000+ term fitness test for humanness. At each
> one of those points GAC knows what it should expect if it were testing an
> average human, because for each one of those points GAC has made at least 20
> measurements of real people.

That's 20 real "average" people.
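To put that sample size in perspective, here's a rough
back-of-the-envelope sketch (assuming simple binomial statistics --
my assumption, not anything GAC has disclosed) of how coarse a
consensus built from 20 yes/no answers really is:

  import math

  def margin_of_error(p, n, z=1.96):
      # 95% margin of error for an observed proportion p out of n
      # binary answers, using the normal approximation.
      return z * math.sqrt(p * (1.0 - p) / n)

  for p in (0.5, 0.75, 0.95):
      print("p = %.2f  +/- %.2f" % (p, margin_of_error(p, 20)))
  # p = 0.50  +/- 0.22
  # p = 0.75  +/- 0.19
  # p = 0.95  +/- 0.10

At a 50/50 split, 20 respondents pin the "average human" answer down
to somewhere between roughly 28% and 72% -- not much of a measurement.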

> 3 - Any contradictions in GAC are real contradictions in us. It can't
> believe anything that hasn't been confirmed by at least 20 people.

You can get 20 real average people to confirm a belief in God --
getting a database and some statistical analysis software to reach
the same conclusion doesn't make God any more "real".

> 4 - GAC is science. Over 8 million actual measurements of human consensus
> have been made. There are at least two other projects that claim to be
> collecting human consensus information - CYC and Open Mind [snip]

I don't know much about Open Mind, and only a little more about Cyc.
I would not classify Cyc as trying to collect human "consensus"
information. Human "consensus" information is often wrong, and I doubt
Doug would be building wrong concepts into a common sense database.
I'd classify Cyc more as an attempt to get information commonly agreed
upon as scientifically correct into a database with an attached
reasoning and inference engine.
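To make the contrast concrete: a consensus store can only tally what
people said, while an inference engine derives new assertions from
old ones. A toy sketch (using my own throwaway triple representation,
nothing like Cyc's actual CycL machinery):

  facts = {("Tweety", "is_a", "bird"), ("bird", "can", "fly")}

  def forward_chain(facts):
      # One toy rule: if X is_a Y and Y can Z, then infer X can Z.
      inferred = set(facts)
      for (x, r1, y) in facts:
          for (y2, r2, z) in facts:
              if r1 == "is_a" and r2 == "can" and y == y2:
                  inferred.add((x, "can", z))
      return inferred

  print(("Tweety", "can", "fly") in forward_chain(facts))   # True

A consensus database could only tell you how many respondents agreed
that Tweety can fly; it has no way to conclude it from what it
already holds.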

I would agree that GAC may be interesting social science, but I
deeply doubt it will produce a useful path to an advanced
artificial intelligence. Since an advanced artificial intelligence
is what most extropians would find of interest, the approach
isn't likely to find a warm reception here. (But you probably
have already figured that out...). What might be interesting
at some future point in time is to watch a Turing test between
Cyc and GAC. It could reveal some very interesting insights
into the many false beliefs that most people hold.

I'd suggest you consider the problem of how one would build into
GAC the concepts of "trust" and "reputations". A quick search
turns up:
  "The Production of Trust in Online Markets" by Peter Kollock
  http://www.sscnet.ucla.edu/soc/faculty/kollock/papers/online_trust.htm
which references some of the early work by Miller & Drexler.
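To make the suggestion concrete, here's a minimal sketch of what
reputation-weighted consensus might look like; the (vote, reputation)
records and the weights are my own illustrative inventions, not
anything GAC or the papers above specify:

  def weighted_consensus(answers):
      # answers: list of (vote, reputation) pairs, vote in {0, 1},
      # reputation a non-negative weight.  Returns the weighted
      # fraction voting 1, or None if there is no weight at all.
      total = sum(rep for vote, rep in answers)
      if total == 0:
          return None
      return sum(rep for vote, rep in answers if vote == 1) / total

  # Twenty low-reputation "yes" votes vs. five high-reputation "no"
  # votes: the weighted answer flips relative to a raw head count.
  answers = [(1, 0.1)] * 20 + [(0, 1.0)] * 5
  print(weighted_consensus(answers))   # ~0.29 weighted, vs. 0.80 raw

The hard design problem, of course, is where the reputation weights
come from in the first place -- which is exactly what the Kollock
paper and the Miller & Drexler work address.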

I've got little interest in knowing what the average human knows.
I've got a great interest in knowing what "high" reputation
humans know.

Robert Bradbury


