Re: Singularity: Human AI to superhuman

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 10 Sep 1998 20:03:46 -0500

Emmanuel Charpentier wrote:

> The basic concept used is neural network, and more particularly what
> I would call associative net (our persona is made by all the
> associations our brain hold).

The newly-available section on neural nets in _Coding_ generalizes neural nets to element-networks, rather than associative nets. Nor has the problem of association been solved by neural nets; the problem of abstracting similar features is far more complex than the linear feature hierarchies now used.
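To make the "linear feature hierarchy" notion concrete, here is a minimal sketch: each layer fires units based on weighted sums of the layer below, so higher layers detect features of features. The `layer` function and the hand-picked weights are purely illustrative assumptions, not taken from any system discussed here; a real net would learn its weights.

```python
def layer(inputs, weights, threshold=0.5):
    """One feature layer: a unit fires (1.0) if its weighted sum of the
    previous layer's activations crosses the threshold."""
    return [1.0 if sum(w * x for w, x in zip(row, inputs)) > threshold else 0.0
            for row in weights]

# Layer 1 detects "edges" over raw pixels; layer 2 detects a feature of
# those features, firing only when both edges are present.
pixels = [1.0, 1.0, 1.0, 1.0]
edges = layer(pixels, [[0.6, 0.6, 0.0, 0.0],   # left pair of pixels
                       [0.0, 0.0, 0.6, 0.6]])  # right pair of pixels
shape = layer(edges, [[1.0, 1.0]], threshold=1.5)
print(edges, shape)  # both edges fire, so the higher-level feature fires
```

The hierarchy is "linear" in the sense criticized above: each level is just thresholded sums of the level below, with no mechanism for abstracting genuinely similar features.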

> This kind of net is very good at recognising patterns, storing
> meaningful data, procedural data, historical data, having emotions
> (it's not a free assumption, we can discuss it), making mistakes,
> learning...

In humans, they are. I disagree with the basic concept that human brains are based on neural nets. Flatworm brains are based on neural nets, and you don't see _them_ doing anything more complex than running for Congress. Human brains use more powerful principles. Deep Blue's chess is based on a high-speed architecture and would have major trouble being exported to a neural network, but Deep Blue is based on "search", not a "Von Neumann architecture".
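The "search" at the heart of Deep Blue can be illustrated with a toy minimax: exhaustively explore a game tree, with the two players alternately maximizing and minimizing the evaluation. Everything below (the number-game, the `moves` and `evaluate` lambdas) is an invented stand-in for illustration; Deep Blue's real evaluator and custom hardware are vastly more elaborate, but the principle is this brute-force tree search, not a neural net.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies
    with the players alternately maximizing and minimizing."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

# Toy game: a state is a number; a move adds 1 or doubles it; the maximizer
# wants the final number as large as possible, the minimizer as small.
best = minimax(1, 3, True, moves=lambda n: [n + 1, n * 2],
               evaluate=lambda n: n)
print(best)  # -> 6
```

The search is "stupid" in exactly the sense meant here: it considers every line mechanically, and its power comes from doing so far faster than any human could.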

> I don't think we can come up with a totally new kind of
> architecture, even rule based systems that could lead to some of those
> features would finally be an associative net (you associate premisses
> to conclusions, left hand to right hand of the equation). Or anybody
> has a proposal?

My AI is based on neither neural nets, nor rule-based systems. Symbols _may_ be fundamentally based on association, but they are still only a part of an AI architecture.

> From that point of view, it means that at least one of the AI
> advantages should be dropped:
> "3.The ability to perform complex algorithmic tasks without making
> mistakes, both because of a perfect memory, and because of a lack of
> distractions."

> Humans perform repetitive and boring tasks all the time, my
> heart can tell you!

This wasn't intended as stating that the AI could perform high-level tasks perfectly, only low-level tasks for which there are complete algorithms; stupid, non-flexible procedures. This is still a very powerful advantage, especially since humans can only do it at a high level, slowly and with mistakes. Deep Blue is stupid, but it is stupid far faster than any human.
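The "complete algorithm" advantage in miniature: a stupid, non-flexible procedure (Euclid's gcd, used here only as a stand-in example) that a machine executes a hundred thousand times with perfect memory and no distractions. A human doing the same arithmetic by hand would be slower and would eventually slip.

```python
def gcd(a, b):
    """Euclid's algorithm: mechanical, exact, and identical on every run."""
    while b:
        a, b = b, a % b
    return a

# Repeat the low-level task many times; the machine never makes a mistake.
results = [gcd(1071, 462) for _ in range(100_000)]
assert all(r == 21 for r in results)
```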

> I would say that, IMHO, the first Artificial
> Intelligence we will create will mostly have the same characteristics
> as us. The same faults, the same qualities, no magic wand. And
> self-enhancement is no better, I don't see why it would make anything
> other than geniuses with any sort of add-on as can be imagined.

Evolution is very different from intelligent design. Linear computers, even a whole bunch of linear computers running over the 'Net, are different from a massively parallel slow computer. Why would AIs be different? Ask rather why they would be the same. The only reason why my AI has any characteristics at all in common with humans is that I was using the human brain for suggestions. Even so, I suspect that the apparent commonalities of architecture are delusive; the AI will use workable features of the human architecture and discard most of the rest, probably winding up with an entirely different form of consciousness.

> And we (as humans in flesh) will probably have a cutting edge for a
> long time: evolution has made us, we are very strongly a part of the
> world, we have instincts (quite entangled sometimes). Those instincts
> can cover everything from pain/pleasure to basic wirings of the brain,
> to algorithms for creating more wirings through what we call games,
> tests, experiments. And we have a body.

No offense, but these arguments always sound like "But fish are the result of billions of years of evolution! How could they be outmatched by a mere nuclear submarine?" Evolution is slow. Evolution is stupid. Indeed, one of the great advantages of AIs is that by adding new intuitions and discarding unworkable ones, they will be far better adapted to today's "abstract" (to humans) environment. They'll outprogram us a hundred times over while the programmers are struggling to read Year 2000 assembly code, Congress is playing petty power squabbles, and the rest of humanity is watching football.

> Well, it is true that AIs still (seem to) have the speed/power
> advantage (and possibly Eli's other abilities), but does that mean it
> will change all the cards? Lead to that end of history that seems to
> be the singularity???

Yup. Do you have any other questions?

-- 
        sentience@pobox.com         Eliezer S. Yudkowsky
         http://pobox.com/~sentience/AI_design.temp.html
          http://pobox.com/~sentience/sing_analysis.html
Disclaimer:  Unless otherwise specified, I'm not telling you
everything I think I know.