Re: Singularity: Human AI to superhuman

Eliezer S. Yudkowsky (sentience@pobox.com)
Fri, 11 Sep 1998 15:24:00 -0500

Emmanuel Charpentier wrote:
>
> ---"Eliezer S. Yudkowsky" <sentience@pobox.com> wrote:
> >
> > I disagree with the basic concept that human brains are
> > based on neural nets.
>
> :DDD You're joking, aren't you? How many neurones are there in a
> brain again? What else do you propose for memory, processes,
> learning, pain/pleasure taking place in you and me?

Perhaps it would be better to say that "association" is not the foundation of thought. The source of "pattern" in memory, learning, pain, and pleasure does not derive from the associational nature of neural networks, but from programs which use neural networks for processing power. A brain is not necessarily built on neurons any more than a spreadsheet is built on silicon atoms. The properties of small human-built neural networks, such as the association of features, will not necessarily show up as high-level properties of the human brain.
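
In caricature - a Python toy, every name in it mine, purely to illustrate the distinction: the lower layer supplies an associative primitive, and the program built on top of it has entirely different high-level properties.

    import itertools

    def similarity(a, b):
        # Stand-in for whatever the associative hardware computes.
        return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

    def best_chain(items, start, goal):
        # An exact, serial, globally optimizing search that merely
        # *uses* the associative primitive for processing power.
        # Global optimality is a property of this program, not of
        # the primitive underneath it.
        best, best_score = None, -1.0
        for perm in itertools.permutations(items):
            chain = [start] + list(perm) + [goal]
            score = min(similarity(a, b) for a, b in zip(chain, chain[1:]))
            if score > best_score:
                best, best_score = chain, score
        return best

    print(best_chain(["cat", "cot", "dog"], "cap", "dot"))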

> > Human
> > brains use more powerful principles.
>
> You need to give me some insight here. I don't see what you mean.

For example, I think that the cerebellum performs some type of constraint propagation, or rather constraint assembly, and that symbolic memory is based on abstracting a set of high-level constraints which the cerebellum assembles. While the constraint propagation is almost certainly optimized on the neural level, there are no "neural networks" I am aware of that perform constraint propagation, since that activity is fundamentally distinct from "association" as we know it.
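
If "constraint propagation" is unfamiliar, here is a minimal arc-consistency sketch in Python. The toy all-different problem and every name in it are mine, purely for illustration; nobody knows what representation the cerebellum actually uses.

    def revise(domains, x, y, constraint):
        # Remove values of x that no value of y can satisfy.
        removed = False
        for vx in list(domains[x]):
            if not any(constraint(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                removed = True
        return removed

    def propagate(domains, arcs, constraint):
        queue = list(arcs)
        while queue:
            x, y = queue.pop(0)
            if revise(domains, x, y, constraint):
                # x's domain shrank; recheck every arc pointing at x.
                queue.extend((z, w) for (z, w) in arcs if w == x)
        return domains

    # Three variables, pairwise unequal; 'a' is already pinned down.
    domains = {'a': {1}, 'b': {1, 2}, 'c': {1, 2, 3}}
    arcs = [(x, y) for x in domains for y in domains if x != y]
    print(propagate(domains, arcs, lambda u, v: u != v))
    # -> {'a': {1}, 'b': {2}, 'c': {3}}

Notice that nothing here is an association between features; it is the systematic elimination of impossibilities.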

> Of course, I shall agree with you on the fact that current
> artificial neural networks are not really human-like; they are
> still just pattern-catchers. But add many more layers, intra-layer
> synapses (and surely many other things), and you can code just
> about anything.

Likewise, my PowerPC 604 can simulate a neural network. But a program running on that neural network would still be based on parallel computing and association, not a von Neumann architecture and arithmetic. Human neurons are not necessarily as simple as they're made out to be; it has been proposed that each neuron is actually the equivalent of a personal computer. Penrose has proposed that each microtubule dimer is a unit of complex computation, and that it uses quantum computation to boot. Association, a very simple feature which appeared in the first Perceptron neural net, is not necessarily the foundation of the brain.
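
By "association" I mean something this bare: a Rosenblatt-style perceptron pairing input patterns with outputs, learned by error correction. (A Python toy of my own construction, not anyone's brain model.)

    def train(samples, epochs=20, rate=0.1):
        # Weights plus a trailing bias weight, all starting at zero.
        w = [0.0] * (len(samples[0][0]) + 1)
        for _ in range(epochs):
            for x, target in samples:
                xs = x + [1.0]                       # append bias input
                out = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0
                err = target - out                   # -1, 0, or +1
                w = [wi + rate * err * xi for wi, xi in zip(w, xs)]
        return w

    # Associate "both features present" with 1 (logical AND):
    samples = [([0.0, 0.0], 0), ([0.0, 1.0], 0),
               ([1.0, 0.0], 0), ([1.0, 1.0], 1)]
    w = train(samples)
    for x, target in samples:
        xs = x + [1.0]
        print(x, 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0)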

> > My AI is based on neither neural nets, nor rule-based systems.
> > Symbols _may_ be fundamentally based on association, but they are
> > still only a part of an AI architecture.
>
> From what I've read, you mostly think that you need to code all
> abilities and 'somehow' have them work together through some world
> model, central to having all the modules communicate with each
> other. But as I see it, what you do is simply code a body of
> features for an AI. You don't give it any ability concerning
> memory, learning, imagination, your basic human thingies. Or do you
> think you simply have to add modules whose work will be to
> 'memorise', 'imagine', 'streamline memory', whatever else is not
> taken care of by the domdules? (domdule = domain module)

I do think that memory, abstract thought, reflexive reasoning, and other aspects of consciousness will have to be programmed in deliberately. They will not emerge spontaneously. Some (very basic) aspects may be implicit in every domdule, and some may take the form of explicit, separate domdules, but they will not appear unless we summon them.

I do not say that coding an AI is a matter of throwing a group of abilities into a pot. Even Lenat, creator of the encyclopedic Cyc, is binding all the tiny facts together with a highly elegant language and many abstract reasoning modules. The challenge in creating a seed AI is 80% core architecture and 20% throwing on more intuitions. The very high creative challenge is purely core architecture, especially the creation of symbols, reflexivity, and world-model synchronization to bind together multiple domdules.

> I don't think you can ever link together an AutoCAD module and an
> OCR module and say "ta-da, here I've got the basis for an AI". What
> you do is put together functionalities, not integrate them (no
> matter the amount of code).

Not without the core architecture, no. An OCR module plus a CAD module yields OCR+CAD unless there's a synergy: symbols that apply to both domdules, visualizations with components in both models.
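
In caricature, again in Python, with every name hypothetical: the synergy lives in symbols that both domdules ground, through one shared world-model.

    class Symbol:
        def __init__(self, name):
            self.name = name
            self.groundings = {}        # domdule name -> representation

    class WorldModel:
        def __init__(self):
            self.symbols = {}
        def symbol(self, name):
            # One shared symbol per name, however many domdules use it.
            return self.symbols.setdefault(name, Symbol(name))

    class OCRDomdule:
        def __init__(self, wm): self.wm = wm
        def see_label(self, text, box):
            self.wm.symbol(text).groundings['ocr'] = {'box': box}

    class CADDomdule:
        def __init__(self, wm): self.wm = wm
        def model_part(self, name, part_id):
            self.wm.symbol(name).groundings['cad'] = {'part': part_id}

    wm = WorldModel()
    OCRDomdule(wm).see_label('flange', box=(10, 20, 80, 35))
    CADDomdule(wm).model_part('flange', part_id='P-104')
    # One symbol, two groundings: the label on the drawing and the
    # part in the model are now the same thing to the system.
    print(wm.symbol('flange').groundings)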

> Markov nets would probably do a better job at it; at least they
> allow you to 'associate' things together!

I doubt very strongly indeed that the memory/symbolic domdule (equivalent of our hippocampus) could be implemented by a simple Markov net.
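
For the record, here's what a first-order Markov net buys you (toy Python, mine): pairwise transition statistics - association, and nothing above it.

    from collections import defaultdict

    sequence = "the cat sat on the mat".split()
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1                  # count each adjacent pair

    # "the" is linked to "cat" and "mat" - and that is the whole
    # story; there is no structure above the pairwise link.
    print({k: dict(v) for k, v in counts.items()})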

> Come on, human body and brain do it all the time. That's what
> happens when you become an expert at a task: you don't need to
> think about it! It's wired!!! And you didn't answer about
> perfection: you can't design perfection into an AI, and have that
> AI work its way around in an imperfect (from our models' point of
> view) universe!!!

You're confusing high-level "perfection" with low-level "perfection". When was the last time your neurons got confused over a matter of philosophy? When did your neurons get bored with firing? You run on an AI Advantage, but you can't use it consciously - you can't tell your neurons to multiply two twenty-digit numbers for you, even though they could do it in a second; you have to use your entire brain and probably paper and pencil to perform this simple procedure, and even then you'll drop a digit or two. AIs will still use error-prone high-level conscious thought to solve uncertain problems; they'll simply have the capability of pouring in massive amounts of low-level procedural thought when necessary.
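
To put a number on that: Python's integers are arbitrary-precision, so the whole low-level procedure your brain can't route its raw power into is three lines.

    a = 31415926535897932384    # twenty digits each
    b = 27182818284590452353
    print(a * b)                # exact product, in far under a second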

> ...
>
> > No offense, but these arguments always sound like "But fish are
> > the result of billions of years of evolution! How could they be
> > outmatched by a mere nuclear submarine?"
>
> I still think evolution has come up with solutions that are pretty
> effective, and we will have to somehow copy them before improving
> and/or completely changing design.

I did. Pattern-catchers are copied from the brain and evolution; reflexive traces, symbols, goals, causality, and pretty much everything else are copied from the mind. (Although adaptive code is pretty much a programmer's invention, and the goal system was completely reworked, and so on.)

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.