>> Quaenos@aol.com wrote:
> Eliezer Yudkowsky wrote:
>> So-called AI researchers would perform a better service for
>> themselves and the AI field if they devoted their time and resources to
>> neuroscience.
>
>I disagree; I would substitute "cognitive science" for "neuroscience" in
>the sentence above. We need to work down, not up. What good does it do
>to know how neurons fire?
As we develop our understanding of neuroscience, forays into cognitive science will naturally follow. We've been trying the "work down" approach for the last fifty years with no qualitative progress in the last two decades. I don't think we should dismiss all efforts in cognitive science, but neuroscience is the near-term limiter on progress in cognitive science.
>Cognitive science is where it's at. Hofstadter and Mitchell's Copycat,
>one of the few real advances in the field (although, alas, not a recent
>one), was created by observing what real people did when they were
>making analogies, and using those observations to deduce what the
>sub-elements of analogies were. They worked down, not up.
I agree with you on this point. If there is worthwhile work being done in AI, it is by those attempting to determine the computational bases of creativity. I've not been swayed either way between the stochastic (Copycat) and deterministic (SME, SOAR, etc.) models of creativity. Research into the knowledge requirements for creative systems also appears worthwhile. But the emphasis on force-feeding lots of basic rules, or on the "hardware" approach, seems ill-conceived. The research area that would contribute most to AI while involving the least philosophical haggling and the most actual science is neuroscience.
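For what it's worth, here is a toy sketch of the distinction I mean (my own illustration in Python, not Copycat's actual code; the codelet names and urgency values are invented). Copycat picks its next micro-action ("codelet") stochastically in proportion to urgency, whereas a deterministic architecture always fires the top-rated one:

import random

# Toy contrast between stochastic and deterministic control, in the
# spirit of Copycat's urgency-weighted codelet selection. The codelet
# names and urgencies below are invented for illustration.
codelets = [("build-bond", 5.0), ("scan-group", 2.0), ("propose-rule", 1.0)]

def pick_stochastic(pool):
    # Copycat-style: sample in proportion to urgency, so low-urgency
    # codelets still fire occasionally and exploration never stops.
    names, urgencies = zip(*pool)
    return random.choices(names, weights=urgencies, k=1)[0]

def pick_deterministic(pool):
    # Deterministic caricature of SME/SOAR-style control: always fire
    # the highest-rated action.
    return max(pool, key=lambda c: c[1])[0]

print(pick_stochastic(codelets))     # varies from run to run
print(pick_deterministic(codelets))  # always "build-bond"

The stochastic variant can wander out of a locally attractive but poor interpretation; the deterministic one commits to it.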
>I don't know about Einstein, but someday I'd like to be the Drexler.
We need the Einstein before we need the Drexler. Einstein ushered in a paradigm shift, whereas Drexler is simply ruminating within a paradigm. We need the paradigm shift first. This leads me to a conjecture concerning scientific progress that I call the Paradigm Shift Singularity Axiom: the ratio of philosophical haggling and nitpickery to actual scientific investigation increases over time as the current paradigm reaches its limits. As the ratio grows without bound, a paradigm shift hopefully/usually occurs; otherwise we begin to have "churches" of thought within fields of study.
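To state the axiom slightly more formally (my own notation, purely illustrative): let H(t) be the volume of philosophical haggling and S(t) the volume of substantive investigation under the current paradigm at time t. The claim is that

  R(t) = H(t) / S(t)  -->  infinity  as  t --> t_limit

where t_limit marks the paradigm's limits; the shift, if it comes, comes near the singularity of R.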
>I freely admit that most of the design in "Coding a Transhuman AI" is
>directly derived from introspection, and the rest is derived indirectly.
> But it's also possible to get too caught up in trying to duplicate the
>human mind.
Agreed. But given that it's the only current example we have of a working intelligence, it seems prudent to uncover its secrets while our efforts to create the beast from scratch are ailing something awful.
>The accomplishment is not in creating something with a surface
>similarity to humanity (classical AI) or that uses the same elements as
>humanity (connectionist AI), but in reducing high-level complexity into
>the complex interaction of less complex elements.
If we can isolate the components of the human mind, we can optimize them; the random shot-in-the-dark approach of evolution has certainly left huge room for improvement. But since we can't even develop a system as intelligent as a tapeworm, I think it's imprudent to start talking about developing a more efficient version of human-level intelligence.
> Both classical AI and
>connectionist AI take great pride in claiming to have found the
>elements, but they usually toss out the complexity - the elements don't
>interact in any interesting way.
Recasting the elements as complexity just rewords the problem; it does not give a useful answer. Describe, deduce, and formalize all you want, but until you have a seed AI or some other tangible product you are still just restating the problem.
Thierry Maxey, Ph.D.
CIO, Seradyne Systems, Inc.