Re: Mike Perry's work on self-improving AI

Eliezer S. Yudkowsky
Tue, 07 Sep 1999 12:06:52 -0500

Any system that can't think its way out of a local optimum has insufficient reflectivity or self-awareness to be successful as a seed AI. If the system can't say, "Gee, I guess I'm at a local optimum, better think my way out of it," then it probably lacks the basic cognitive faculties to understand its own design at all, much less improve it.
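To make the "local optimum" concrete: a minimal, non-reflective optimizer illustrates exactly the failure mode described. Plain hill climbing halts the moment no neighbor improves, and the usual blind workaround is a random restart rather than any reasoning about *why* it got stuck. This is an illustrative sketch (the objective `f` and all names are invented for the example, not anything from the original post):

```python
import random

# Toy objective with two peaks: a weaker one near x = 2 and the
# global one near x = 8. Invented for illustration only.
def f(x):
    return -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

def hill_climb(start, step=0.1):
    """Greedy hill climbing: stops at the first local optimum."""
    x = start
    while True:
        best = max([x - step, x, x + step], key=f)
        if best == x:        # no neighbor improves -- stuck
            return x
        x = best

def climb_with_restarts(restarts=20, seed=0):
    """Blind escape strategy: restart from random points and keep
    the best peak found. No self-model, no reasoning about why
    earlier runs got stuck -- just brute repetition."""
    rng = random.Random(seed)
    best = hill_climb(rng.uniform(0, 10))
    for _ in range(restarts):
        candidate = hill_climb(rng.uniform(0, 10))
        if f(candidate) > f(best):
            best = candidate
    return best

print(round(climb_with_restarts(), 1))  # lands near x = 8, the global peak
```

Starts below roughly x = 4.9 climb into the weaker peak at x = 2 and stop there; only the lucky restarts that begin on the other slope reach the global peak. The contrast with the post's point: this program never notices it is at a local optimum, it just halts, which is why it can't "think its way out."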

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way