Re: AI

Eliezer S. Yudkowsky (sentience@pobox.com)
Wed, 24 Nov 1999 13:49:49 -0600

Rob Harris wrote:
>
> I've posted this same point a million times, but I'm going to do it again
> anyway - cos it's not getting through. When you build a system to perform a
> certain task, you have to tell it what to do - not what NOT to do. There is
> nothing that does everything and has to be constrained down to a set task.

Hear, hear! This "general motive" fallacy is almost precisely the complement of the "general reasoning" fallacy that kept AI research from investigating domain-specific cognition for decades.

> Some talk of "seed AI" becoming a self aware nemesis of humanity. Crap. You
> see, the idea of "seed AI" is analogous to the evolution of life itself.

Okay, I gotta dispute this. I was the person who invented the term "seed AI". *I* certainly am not worrying about it becoming a nemesis of humanity, and the whole *point* of seed AI is to attain transhuman intelligence, which requires self-awareness (in the reflexive, self-modeling sense of the word, not the qualia-bearing sense).

A seed AI is an AI that understands its own source code and is capable of rewriting that source code, and even its own architecture. The idea is that the AI redesigns itself to a higher level of intelligence, then re-redesigns itself with that new intelligence, and so on, until the AI reaches either (1) the limits of available hardware or (2) the intelligence required to create "rapid infrastructure", i.e. nanotechnology.
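To make the bootstrap loop concrete, here is a toy sketch in Python; every name and number in it is invented for the example, not drawn from any actual design. Each cycle, the current version redesigns itself, and the loop stops at exit (1) or exit (2) above, or when a cycle fails to find any further improvement:

    HARDWARE_LIMIT = 1e6        # exit (1): limits of available hardware
    INFRASTRUCTURE_LEVEL = 1e4  # exit (2): enough intelligence for nanotech

    def redesign(intelligence):
        # Stand-in for "rewriting its own source code": the size of the
        # improvement found depends on the intelligence doing the rewriting.
        return intelligence * 1.5

    def bootstrap(intelligence=1.0):
        while True:
            successor = redesign(intelligence)
            if successor <= intelligence:
                return intelligence      # improvement has stalled
            intelligence = successor
            if intelligence >= HARDWARE_LIMIT:
                return intelligence      # exit (1)
            if intelligence >= INFRASTRUCTURE_LEVEL:
                return intelligence      # exit (2)

    print(bootstrap())   # climbs from 1.0 to ~1.3e4, stopping at exit (2)

The interesting part, of course, is the step the sketch waves away: redesign itself improves as intelligence increases, which is what makes the process open-ended rather than a fixed program.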

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way