Re: Book: "Wild computing" by Ben Goertzel

Eliezer S. Yudkowsky (sentience@pobox.com)
Sat, 27 Mar 1999 20:07:57 -0600

Alexander 'Sasha' Chislenko wrote:
>
> Maybe, somebody could write a review?
> (A positive one, please - Ben is my boss! ;-) )

Well, the Webmind architecture looks an awful lot like my own best idea of an AI architecture as of about two, three years ago. I don't get much more complimentary than that!

At the time, I was still thinking in terms of nodes, nodes that acted on nodes and nodes that referenced nodes, where the programming effort was in creating the basic heuristics and the heuristics that could tell how well heuristics worked; the latter would act upon the former to let pattern arise from the nodes and their interactions. The nodes and the links between nodes would be far more powerful and complex than those of a mere semantic net, and the nodes could examine other nodes and apply heuristics to them. In this sense, I suppose my architecture actually was more general than Webmind; the agents themselves were nodes.
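
A minimal sketch of the idea in Python (the names here are my own illustration, not anything from Webmind): heuristics are themselves nodes, so a meta-heuristic that judges heuristics is just another node acting on nodes.

    # Toy supernet: everything, including heuristics, is a node.
    class Node:
        def __init__(self, content):
            self.content = content
            self.links = []        # richer than a semantic net's edges

    class Heuristic(Node):
        def __init__(self, content, action):
            super().__init__(content)
            self.action = action   # a function that acts on nodes
            self.score = 0.0       # maintained by meta-heuristics

        def apply(self, target):
            return self.action(target)

    # A meta-heuristic is a heuristic whose targets are heuristics;
    # it judges how well other heuristics work and adjusts their scores.
    def reward(h):
        h.score += 1.0
        return h

    meta = Heuristic("reward useful heuristics", reward)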

But it was still fundamentally flawed. A mind isn't a semantic net, not even a super-net. That's not to say your *project* is fundamentally flawed, in the sense of not being able to achieve the design goals; the supernet would have been more powerful than any classical AI or neural net on the market, being a cross of both, and having the capability to incorporate both into the nodes. Perhaps the supernet could even go all the way and Transcend, but I doubt it, because I don't trust patterns to arise in the supernet that are sufficiently more powerful than what's put in; and where nodes must be simple enough to understand common data formats and each other, they do not have the complexity to create true understanding. Understanding would have to be built on top of the nodes, as a pattern of pattern of patterns. This is the Hofstadterian paradigm applied to semantic nets instead of neural nets, which unfortunately happens to be wrong in the second case and probably unworkable in the first; it is certainly inefficient and hard to document.
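
To make "pattern of pattern of patterns" concrete, here is a toy illustration (mine, not the book's): the same pattern-finder applied to its own output, once per level.

    from collections import Counter

    def patterns(seq):
        # First-order patterns: adjacent pairs occurring more than once.
        pairs = Counter(zip(seq, seq[1:]))
        return [p for p, n in pairs.items() if n > 1]

    tokens = list("abcabcabx")
    level1 = patterns(tokens)    # patterns over the raw nodes
    level2 = patterns(level1)    # patterns over the patterns
    # Understanding, on this view, lives several such levels up,
    # which is exactly why it is inefficient and hard to document.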

Again, to remain complimentary, I speak of true intelligence, not of achieving the relatively modest design goal of predicting financial data.

From my current perspective, I would say that Webmind implements a specialized (more polite than "crippleware") case of the general version of the Elisson architecture, in which domdules are incarnated as agents, and the function of symbolization/domdule interoperation is incarnated as a limited set of common shared data formats called "nodes". Or to use the RNUI design principle, the agents incarnate a Notice level and the nodes incarnate a Represent level; because domdules are broken up into agents and a set of common data formats is used, the problem of interfacing between full domdules never arises.

(RNUI - Represent, Notice, Understand, Invent. You have to represent something before you can notice it and "notice something before you can understand it". Drew McDermott invented the NU part of this requirement.)
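
Concretely, the split looks something like this (the node formats and the agent are hypothetical, purely for illustration):

    # Represent level: nodes are fixed, programmer-supplied data formats.
    class TextNode:
        def __init__(self, text):
            self.text = text

    class NumberNode:
        def __init__(self, value):
            self.value = value

    # Notice level: an agent can only notice facts expressible
    # in the node formats the programmers thought to provide.
    class WordCountAgent:
        def notice(self, node):
            if isinstance(node, TextNode):
                return NumberNode(len(node.text.split()))
            return None   # any fact with no node format is invisible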

Unfortunately, this limits nodes to the Represent level and prevents agents from Noticing any fact that cannot be expressed at the Represent level. There are no node formats for higher-level Notice data or low-level Understand data. For example, Copycat has Notice-level information, the bonds and correspondences it uses to create analogies, and codelets analogous to agents. But how can Webmind represent bonds if the programmers didn't think of them and provide them as a basic data format? And won't the explosion of data formats cripple node interoperability by forcing the programmers to write O(N^2) pieces of interface code?
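
The arithmetic of the explosion (toy numbers, mine): with N formats and direct pairwise interoperation, you need one converter per ordered pair, against two per format through a shared intermediate representation.

    formats = ["text", "number", "bond", "correspondence", "analogy"]

    # Direct pairwise interoperation: one converter per ordered pair.
    pairwise = [(a, b) for a in formats for b in formats if a != b]
    print(len(pairwise))     # 20 converters for 5 formats: O(N^2)

    # Through a shared intermediate format: one in, one out, per format.
    via_hub = 2 * len(formats)
    print(via_hub)           # 10 converters: O(N)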

And how does the Elisson architecture handle the problem? I'd answer that, but I have to go make dinner. More on this later, and I ought to read the whole book before I go on. Nice book, though.

-- 
        sentience@pobox.com          Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/singul_arity.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.