>Chris, since you have probably already read Eliezer's comments
>(if you haven't you should follow the links from extropy.org
>to the archives of the last week or so), I'll simply comment
>that some of us are fairly neutral regarding approaches to
>different aspects of AI (and will freely admit our lack of
>knowledge with regard to much of it).
>So, stating my ignorance in advance, I'll simply make a few comments
>on your comments:
>> 1 - GAC is a black box. I have made no disclosures on what it uses for
>> pattern matching [snip]
>While I can understand reasons for doing this, it will leave a bad
>taste in the mouths of many with an academic or open-source perspective.
GAC won't stay a black box forever. I'm doing some work now that I hope to
have in peer review soon that involves processing the Mindpixel Corpus using
a string-based SOM.
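The core step of a string-based SOM is finding the best-matching unit by edit distance rather than Euclidean distance. Since the actual GAC internals are undisclosed, the following is only a minimal sketch of that step under the assumption that plain Levenshtein distance is the similarity measure; function names are illustrative.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def best_matching_unit(prototypes, item):
    """Index of the prototype string closest to `item` by edit distance."""
    return min(range(len(prototypes)),
               key=lambda i: levenshtein(prototypes[i], item))
```

A full SOM would then nudge the winning prototype (and its grid neighbors) toward the input string, e.g. by applying a fraction of the edit operations; that update rule is the part a peer-reviewed write-up would have to pin down.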
>> 2 - The primary purpose of GAC is to build a fitness test for humanness
>> in the binary response domain. This will in the future allow GAC to babysit
>> an evolving artificial consciousness, rewarding and punishing it as needed
>> at machine speeds.
>I'd say this statement is flawed from two perspectives. First, if you
>have no verification of the "reputation" of your "humanness" sources you
>have no controls on the results. You are going to get much different
>results if your humans are Jesuit priests vs. modern-day neo-nazis.
>Presumably you get a slightly more intelligent and affluent cross
>section of humanity (i.e. the people net-connected and producing
>the inputs to your system). That would seem to imply you are going
>to get a pretty "average" human perspective out of the whole effort.
>Producing more "average" humans isn't of much use from an extropian
>perspective. We have more than enough problems figuring out how to
>feed the ones produced using the regular old-fashioned manufacturing process.
For now, I'm only really concerned with the items on which there is fairly
uniform consensus. The things that vary from Jesuit to neo-nazi don't really
concern me. It's what they share in common that interests me.
> Second, there seems to be an implicit assumption that using
>current "humanness" can evolve an artificial consciousness. Leaving
>aside the meaning of the suitcase term "consciousness", the
>problem may be that for humans to have reached that state they
>had to go through their entire evolutionary history. Modern humans,
>not knowing how to chip flint or stalk a mammoth, may not be able
>to regenerate "consciousness". All you may end up with is a machine
>able to pass the Turing test, but still not be "conscious".
I disagree. I consider consciousness a communication issue and a subset of
the humanness problem. I firmly believe that once you have a machine that
really passes the Turing Test, you must consider it conscious, or create a
double standard.
>> Right now, GAC is a 50,000+ term fitness test for humanness. At each
>> one of those points GAC knows what it should expect if it were testing an
>> average human, because for each one of those points GAC has made at least
>> 20 measurements of real people.
>That's 20 real "average" people.
>> 3 - Any contradictions in GAC are real contradictions in us. It can't
>> believe anything that hasn't been confirmed by at least 20 people.
>You can get 20 real average people to confirm a belief in God --
>getting a combination of a database and some statistical analysis
>software to do the same doesn't make God any more "real".
But, the human concept of God is very real. I'm not measuring reality. I'm
measuring our shared conception of reality.
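The rule described above — GAC "believes" an item only once at least 20 people have confirmed it — can be sketched as a simple tally. This is a hedged illustration, not GAC's actual code; the class name, the 20-response threshold's pairing with a consensus fraction, and the 0.95 cutoff are assumptions for the example.

```python
from collections import defaultdict

MIN_RESPONSES = 20   # minimum measurements before any belief is formed
CONSENSUS = 0.95     # assumed fraction that must agree (illustrative)

class ConsensusStore:
    """Tallies yes/no judgments per item and reports a belief only
    once enough real people have weighed in."""

    def __init__(self):
        self.votes = defaultdict(lambda: [0, 0])  # item -> [yes, no]

    def record(self, item, answer):
        self.votes[item][0 if answer else 1] += 1

    def belief(self, item):
        yes, no = self.votes[item]
        total = yes + no
        if total < MIN_RESPONSES:
            return None            # not enough measurements yet
        return yes / total >= CONSENSUS
```

Note that this measures exactly what the text claims: shared conception, not reality. Twenty confirmations of "God exists" yield a belief about humans, not about God.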
>> 4 - GAC is science. Over 8 million actual measurements of human consensus
>> have been made. There are at least two other projects that claim to be
>> collecting human consensus information - CYC and Open Mind [snip]
>I don't know much about Open Mind, and only know a little more about Cyc.
>I would not classify Cyc as trying to collect human "consensus"
Maybe you don't, but Doug Lenat does. Just go to Google and search on 'CYC
Lenat consensus' - and see for yourself.
>Human "consensus" information is often wrong and I doubt Doug would be
>building wrong concepts into a common sense database. I'd classify Cyc
>as more of an attempt to get information commonly agreed upon as
>scientifically valid into a database with an attached reasoning and
>inference engine.
>I would agree that GAC may be interesting social science, but I
>deeply doubt it will produce a useful path to an advanced
>artificial intelligence. Since an advanced artificial intelligence
>is what most extropians would find of interest, the approach
>isn't likely to find a warm reception here.
You have to start somewhere. I don't have 40,000 PhDs entering data. I have
40,000 normal people. Now of course it's a simple matter for me to give my
users some objective test and have GAC pay more attention to those that
score well... but that's in the future.
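The future refinement mentioned here — paying more attention to users who score well on an objective test — amounts to weighting each response by the respondent's score. A minimal sketch under that assumption (the function name and the [0, 1] weight scale are illustrative):

```python
def weighted_consensus(responses, weights):
    """Fraction of weighted 'yes' mass for one item.

    responses: {user: True/False} judgments on the item
    weights:   {user: score in [0, 1]} from some objective test
    Returns None if no weighted responses exist.
    """
    total = sum(weights[u] for u in responses)
    if total == 0:
        return None
    yes = sum(weights[u] for u, ans in responses.items() if ans)
    return yes / total
```

With uniform weights this reduces to the plain consensus of 40,000 normal people; raising the weights of high scorers shifts the measure toward their shared conception.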
>(But you probably
>have already figured that out...). What might be interesting
>at some future point in time is to watch a Turing test between
>Cyc and GAC. It could reveal some very interesting insights
>into the many false beliefs that most people hold.
>I'd suggest you consider the problem of how one would build into
>GAC the concepts of "trust" and "reputations". A quick search
>turns up "The Production of Trust in Online Markets" by Peter Kollock,
>which references some of the early work by Miller & Drexler.
>I've got little interest in knowing what the average human knows.
>I've got a great interest in knowing what "high" reputation humans know.
Me too. There are just far too few of them to be useful at the moment. Don't
worry though. GAC will be around forever. I'm setting it up with its own
income stream and the ability for consensus to decide what happens to it in
the future.
This archive was generated by hypermail 2b30 : Fri Oct 12 2001 - 14:39:41 MDT