Re: Mike Perry's work on self-improving AI

Eliezer S. Yudkowsky
Wed, 08 Sep 1999 06:52:21 -0500

Matt Gingell wrote:
> Doesn't it seem though that there must be more to it than that? You need some way of forgetting or pruning the past or you'll drown in irrelevant detail. I'm reminded of a (possibly apocryphal) anecdote about a mental patient with a photographic memory who couldn't tell whether there was a glass on his bedside table, or he was remembering the one that was there yesterday.
> -matt

In _Coding a Transhuman AI_, I argue that a useful system of symbols and memory requires context-sensitive reconstruction from some (presumably more compact) coding, not just reloading a precise recording of the last set of data structures. For example, if "cat" just loads in a picture of every cat I've ever seen, how do I imagine a "purple cat"? And once that infrastructure is in place - in fact, I suspect that the infrastructure evolved that way to begin with - the tremendous storage capacity necessary to store precise images doesn't seem worth the evolutionary candle.
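A toy sketch of the distinction (my own illustration, not anything from _Coding a Transhuman AI_; the feature names and dictionaries are invented for the example): memory as compact feature codes plus context-sensitive composition, rather than verbatim replay of stored images.

```python
# Compact prototype codes: each concept stores a few salient features,
# not a pixel-perfect record of every instance ever seen.
PROTOTYPES = {
    "cat": {"shape": "feline", "size": "small", "color": "gray", "texture": "furry"},
    "glass": {"shape": "cylinder", "size": "small", "color": "clear", "texture": "smooth"},
}

# Modifiers are also compact codes; they carry only the features they change.
MODIFIERS = {
    "purple": {"color": "purple"},
    "giant": {"size": "huge"},
}

def imagine(*words):
    """Reconstruct an image by composing codes; later codes override earlier ones."""
    image = {}
    for word in words:
        code = PROTOTYPES.get(word) or MODIFIERS.get(word)
        if code is None:
            raise KeyError(f"no stored code for {word!r}")
        image.update(code)  # context overrides the prototype's defaults
    return image

# "purple cat": the prototype supplies shape and texture, the modifier
# overrides color -- a combination no literal replay of past cats contains.
print(imagine("cat", "purple"))
```

The point of the sketch is only that composition over compact codes generalizes to never-seen combinations, whereas a lookup table of recorded images cannot, and costs far more storage besides.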

The legendary theory that all memories are stored subconsciously seems to me like obvious bull. I recall, for example, the experiment showing that (gaussian) women are better than (gaussian) men at memorizing the location of objects, either relative or in spatial arrays; 70% better if you're testing casual, rather than deliberate, memory. I'm not getting into gender politics, but I do want to point out that a differential selection pressure implies a selection pressure, and one strong enough to get a 70% performance improvement; if we really have perfect memories stored, why haven't such selection pressures resulted in perfect performance?

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way