Re: Cognition [was Re: Deep Blue - white paper]

Eliezer S. Yudkowsky (sentience@pobox.com)
Thu, 02 Sep 1999 13:48:52 -0500

mark@unicorn.com wrote:
>
> Eliezer S. Yudkowsky [sentience@pobox.com] wrote:
> >I'm not sure how you're defining "AI" here, but such a process certainly
> >wouldn't be "intelligent". It would not be creative, self-modelling, or
> >capable of representing general content. It wouldn't have goals or a
> >world-model except in the same way a thermostat does. Deep Blue doesn't
> >know that it knows.
>
> Most arguments against AI start by
> claiming that humans have wonderful facilities which computers don't, when
> they have absolutely no proof.
>
> Can you prove that you can do all those things you're claiming that the
> computer can't do? If not, why should I accept this argument?

Not "the computer". Deep Blue. I'm listing a set of features I talked about in "Coding a Transhuman AI". Obviously I think computers can be made to do them. Equally obviously, these are particular features that we have and Deep Blue doesn't, just like we don't have built-in billion-move chess-extrapolation trees.

-- 
           sentience@pobox.com          Eliezer S. Yudkowsky
        http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way