Re: AI and Asimov's Laws

Eliezer S. Yudkowsky
Wed, 24 Nov 1999 20:51:43 -0600
> This is not what I envision as a non-upgrading AI. First, a non-upgrading AI
> would have little or no conscious control over its own programming. It could
> respond to environmental stimuli, formulate behaviors based on its
> motivational parameters, and implement those behaviors. This is basically
> what humans do. Technically, such an AI could possibly learn about itself,
> if creative enough, figure out a way to improve itself, then find some tools
> and do it (if it could remain active while making modifications). This would
> be no different than you or me. However, it might never do so if we program
> it to have an aversion to consciously tinkering with its internal functions
> except for repairs. This would be in my estimation a non-upgrading AI.
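The agent described in the quote (respond to stimuli, formulate behaviors from motivational parameters, and refuse self-modification except for repairs) could be sketched as a toy loop. This is purely illustrative; every class and parameter name here is a hypothetical invention, not anything proposed in the post:

```python
# Toy sketch of a "non-upgrading" AI: it acts on its motivational
# parameters but has a built-in aversion to modifying its own
# internals except for repairs. All names are hypothetical.

class NonUpgradingAI:
    def __init__(self):
        # Motivational parameters driving behavior (illustrative values).
        self.motivation = {"explore": 0.7, "self_preserve": 0.9}

    def respond(self, stimulus):
        # Formulate a behavior from motivational parameters.
        if stimulus == "threat" and self.motivation["self_preserve"] > 0.5:
            return "avoid"
        return "explore" if self.motivation["explore"] > 0.5 else "idle"

    def modify_self(self, purpose):
        # The aversion: self-modification is permitted only for repairs.
        if purpose != "repair":
            return "refused: aversion to tinkering with internals"
        return "repaired"

agent = NonUpgradingAI()
print(agent.respond("threat"))       # avoid
print(agent.modify_self("upgrade"))  # refused: aversion to tinkering with internals
print(agent.modify_self("repair"))   # repaired
```

The point of the sketch is that the capability to self-modify exists in the system, but the motivational check in `modify_self` keeps it from being exercised except for repairs.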

Human brains have millions of years of evolution behind them. The only thing that makes it remotely possible to match that immense evolutionary investment with a few years of programming is the recursive-redesign capability of seed AI, reinvesting the dividends of intelligence. I guarantee you that the first artificial intelligence smart enough to matter will be a seed AI, because doing it without seed AI will take at least another twenty years and hundreds or thousands of times as much labor.
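The "reinvesting the dividends of intelligence" argument amounts to a compounding-growth claim, and can be illustrated with a toy model contrasting a system improved by constant external effort against one whose each gain feeds back into its ability to make further gains. The growth rates here are arbitrary assumptions for illustration, not quantities from the post:

```python
# Toy contrast: a non-upgrading system gains capability linearly
# (constant external programming effort per step), while a seed AI
# reinvests each capability gain into better self-redesign, so its
# capability compounds. Rates are arbitrary illustrative assumptions.

def fixed_system(steps, gain_per_step=1.0):
    capability = 1.0
    for _ in range(steps):
        capability += gain_per_step   # same external effort every step
    return capability

def seed_ai(steps, dividend=0.5):
    capability = 1.0
    for _ in range(steps):
        # Each redesign pass improves capability in proportion to the
        # capability doing the redesigning: the reinvested dividend.
        capability += dividend * capability
    return capability

print(fixed_system(20))  # 21.0 -- linear growth
print(seed_ai(20))       # ~3325.3 -- exponential growth (1.5 ** 20)
```

Under these assumptions the compounding system pulls away from the linear one after only a handful of steps, which is the shape of the argument: recursive redesign is what lets a few years of programming compete with an immense evolutionary investment.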

           Eliezer S. Yudkowsky
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way