Re: Mike Perry's work on self-improving AI

Eliezer S. Yudkowsky
Tue, 07 Sep 1999 12:15:14 -0500

Joseph Sterlynne wrote:
> Now, I'm not necessarily challenging the
> idea---just exploring it---but many of us seem to think that an upload will
> have no trouble whatsoever with effecting any desired change, including
> substantial upgrades. What guarantee do we have that there is no inherent
> problem in a human-class mind perceiving all of itself? There is of course
> the question of self-observation; it would certainly help to know how much
> control our consciousness really has over the rest of the mind.

You don't modify your entire mind at once. You look at a subsection, then modify that.

If there's one thing that distinguishes the field of "transhuman AI" from the graveyard of dreams that marks previous AI, it's that in transhuman AI, the intelligence is complex enough to be divided into parts, or into architecture and content. To quote from yet another unpublished work:

One of the fundamental distinctions between transhuman AI and previous AI is that, while previous AIs are sometimes divided into modular knowledge bases and the central engine, the central engine itself is usually - perhaps even always - such a simple algorithm that it can't be divided into content and architecture. This is, in fact, the same factor that distinguishes the Standard Social Sciences Model from evolutionary psychology - one holds that any content-specificity reduces flexibility by restraining general intelligence, the other that flexibility is the result of having lots and lots of specificity for many different kinds of content.

EURISKO, despite being crystalline, wasn't a "previous AI", but a transhuman AI - the heuristics provided active problem-solving ability that could be divided into general heuristics and domain-specific heuristics. Thus the general intelligence could act to modify a single heuristic, and though they were similar enough - in the end, it was a limited set of heuristics modifying each other - that there was an eventual sterility problem, it got a longer run-up than any other program before or since.
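EURISKO's actual Lisp machinery isn't reproduced here, but the shape described above - heuristics represented as data that general heuristics can score, copy, modify, and discard - can be sketched minimally. Everything below (the Heuristic class, the toy scoring task, the mutate/prune meta-heuristics) is illustrative, not EURISKO's real code; the sterility problem shows up as the pool converging on variants of one heuristic.

```python
import random

random.seed(0)  # deterministic run for the illustration

# A "heuristic" here is just data: a numeric parameter plus a worth score,
# so that other heuristics can inspect and rewrite it.
class Heuristic:
    def __init__(self, param):
        self.param = param   # the content a meta-heuristic may edit
        self.worth = 0.0     # credit assigned from past performance

def score(h):
    # Stand-in domain task: heuristics closer to a hidden target do better.
    return -abs(h.param - 42)

# General (meta) heuristic: copy the highest-worth heuristic with a tweak.
def mutate(pool):
    best = max(pool, key=lambda h: h.worth)
    pool.append(Heuristic(best.param + random.uniform(-1, 1)))

# General (meta) heuristic: discard the lowest-worth heuristics.
def prune(pool, keep=8):
    pool.sort(key=lambda h: h.worth, reverse=True)
    del pool[keep:]

pool = [Heuristic(random.uniform(0, 100)) for _ in range(8)]
for _ in range(200):
    for h in pool:
        h.worth = score(h)   # re-judge every heuristic
    mutate(pool)             # heuristics modifying heuristics
    prune(pool)

print(max(h.worth for h in pool))  # climbs toward 0 as params near 42
```

Because new heuristics are only ever tweaked copies of existing ones, the pool's diversity collapses - a crude analogue of the run-up-then-sterility pattern the paragraph describes.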

There's a subjective factor involved in judging between what constitutes a "knowledge base" and what constitutes "intelligence", but any sane AIer should be able to distinguish them easily. Previous AIers will argue interminably about philosophical definitions and triumphantly point out minor crossovers; also, they'll conflate the distinction between intelligence and knowledge with the distinction between procedural and declarative knowledge. The paradigm of pragmatism lets us ignore them, since the distinction is obvious most of the time.

The practical distinction between intelligence and knowledge is easy enough, even when both take the form of declarative data. If the static data is used in analogies and similarity analysis, that's knowledge; if the data is used to direct operations on other data, if it contains the pattern of nontrivial procedures that can operate on data, that's intelligence. Again, those are just practical guidelines, not definitions.
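The practical test above can be made concrete. In this sketch (my framing, not a quote from the text) both entries are declarative data, but only one directs operations on other data: the fact table is consulted for lookup and analogy, while the rule list encodes a procedure that an interpreter executes - and, being data, the rule list is itself editable.

```python
# "Knowledge": static facts consulted for lookup and similarity.
facts = {"sparrow": "bird", "trout": "fish"}

def analogy(word):
    return facts.get(word, "unknown")

# "Intelligence" (in the practical sense above): declarative data that
# encodes a nontrivial procedure operating on other data.
rules = [("strip", None), ("lower", None), ("replace", ("-", " "))]

def apply_rules(text, rules):
    # Interpret each (operation, argument) pair against the input data.
    for op, arg in rules:
        if op == "strip":
            text = text.strip()
        elif op == "lower":
            text = text.lower()
        elif op == "replace":
            text = text.replace(*arg)
    return text

print(analogy("sparrow"))                        # -> bird
print(apply_rules("  Self-Modifying  ", rules))  # -> self modifying
```

The point of the toy: a program that rewrites `rules` is modifying its own procedures while touching nothing but data - the loophole that lets declarative content count as intelligence.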

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way