Re: The most powerful approach to AI ever discovered?

From: Anders Sandberg (asa@nada.kth.se)
Date: Fri Dec 14 2001 - 11:41:12 MST


On Fri, Dec 14, 2001 at 12:56:32PM -0500, John Clark wrote:
>
>
> http://www.eet.com/story/OEG20011213S0065

Hmm, from the article:

         Under the hood of Cortronics' solution to the classic "sparse"
        coding problem is a feature-attractor architecture in which a
        neural network self-organizes a huge but fixed set of tokens into a
        universal representation of the data set. Each token in the
        universal set is associated with only a few other tokens, mimicking
        the sparse connections among the billions of neurons in the human
        brain. Every entity in the database then becomes a string of tokens
        and their associations.

         By reinforcing the associations between both spatially and
        temporally "contiguous" information, the neural net reinforces the
        connections between often-appearing contiguities in its data
        stream. As a result, a higher-order amalgamation of
        often-synchronized features emerges from the topology of the
        network - namely, "objects" become defined as globs of
        often-appearing-together tokens."

The names are a bit different, but it sounds very much like Hebbian
learning in something like Kanerva's sparse distributed coding. What is
less clear is how the segmentation happens, but that could probably be
fixed with a cleverer learning rule.
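
For concreteness, here is a toy sketch of the kind of Hebbian
co-occurrence reinforcement over a fixed token set I am thinking of.
This is purely my own illustration of the analogy, not anything from
the article; the window size, learning rate and top-k sparsification
are all my assumptions:

  import numpy as np

  # A fixed vocabulary of tokens; entities are strings of tokens.
  # Hebbian rule: strengthen the association between tokens that
  # appear together in the same (spatial/temporal) window.
  N_TOKENS = 1000                     # "huge but fixed" set (tiny here)
  W = np.zeros((N_TOKENS, N_TOKENS))  # pairwise association strengths

  def reinforce(window, lr=0.1):
      # Every pair of tokens co-occurring in a window gets its
      # mutual association strengthened.
      for i in window:
          for j in window:
              if i != j:
                  W[i, j] += lr

  def sparse_associates(token, k=5):
      # Each token keeps only its k strongest links - the "sparse
      # connections" of the architecture.
      return np.argsort(W[token])[-k:][::-1]

  # A data stream of "contiguous" token windows.
  for window in [[3, 17, 42], [3, 17, 99], [17, 42, 3], [250, 251]]:
      reinforce(window)

  # Tokens 3, 17 and 42 now form a glob of often-appearing-together
  # tokens - an emergent "object" in the article's terminology.
  print(sparse_associates(3))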

Where the article gets totally opaque is how association by similarity into
hierarchies happens; this sounds like the really new part, yet it is not
described in any detail.
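
The article does not even hint at what the similarity metric is. Just to
have something concrete to argue about, the naive guess (entirely mine,
and probably not what they actually do) would be overlap between the
sparse token strings representing two entities:

  def similarity(entity_a, entity_b):
      # Naive similarity-of-meaning guess: Jaccard overlap between
      # the token sets representing two entities.
      a, b = set(entity_a), set(entity_b)
      return len(a & b) / len(a | b)

  # Entities sharing many tokens would be candidates for merging
  # into the same node of an abstraction hierarchy.
  print(similarity([3, 17, 42], [3, 17, 99]))   # -> 0.5

Plain overlap alone would hardly give you abstraction hierarchies,
though, so something more must be going on.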

Hmm, looking at their website
(http://www.hnc.com/innovation_05/cortronics_050105?) they claim the
solution to the reuse problem is significance learning - hey! I have been
suggesting that too :-) But the real secret still seems to be the
abstraction hierarchies / similarity-of-meaning metrics. How these are set
up is not revealed, which makes me a bit suspicious: could it be that they
have to be hand-constructed, or have they really invented general
intelligence?

Well, it would be fun if this became The Next Big Thing, but I'll believe
it when I see it.

-- 
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y



This archive was generated by hypermail 2b30 : Sat May 11 2002 - 17:44:26 MDT