hal@finney.org wrote:
>
> It sounds like you're saying that the "box" of conventional AI is simply
> that they are working on tractable problems where progress is possible,
> while you would rather build up complex theoretical designs. But designs
> without grounding in practical testing are very risky. The farther you
> go, the more likely you are to have made some fundamental mistake which
> shakes the whole beautiful edifice to the ground.
I have no problem with either toy domains or practical testing, but to make any progress at all, sooner or later you have to attack that toy domain with a toy - but complete - cognitive architecture, not a search tree or a propositional-logic manipulator or whatever. Modern AI is sort of like six different chess-playing programs, one of which simulates only pawns, one only rooks, one only bishops... Actually, that's a bad analogy; it would be a more workable approach than what they're doing now. Maybe the better analogy is a chess-playing program that only moves pieces around, without ever playing against an opponent.
Actually, the best analogy is the literal truth: modern AI is like a modern chess-playing program. No complexity. No self-awareness. No consideration of the different facets of the problem. No symbols. No memory. No reasoning. Just a search tree.
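To make "just a search tree" concrete, here is a minimal sketch of what such a program does at its core - plain minimax over board positions. The State interface here (moves, apply, evaluate, is_terminal) is hypothetical; real engines add pruning and a tuned evaluation function, but nothing architecturally different:

    # A minimal minimax sketch, assuming a hypothetical State interface
    # with moves(), apply(), evaluate(), and is_terminal().
    def minimax(state, depth, maximizing):
        # At the search horizon or end of game, fall back on a static
        # evaluation of the position - no memory, no symbols, just a number.
        if depth == 0 or state.is_terminal():
            return state.evaluate()
        # Otherwise recurse: my best move, assuming your best reply.
        scores = [minimax(state.apply(m), depth - 1, not maximizing)
                  for m in state.moves()]
        return max(scores) if maximizing else min(scores)

That one recursive function, plus an evaluation heuristic, is essentially the entire "mind" of the program.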
Again, I don't object to toy problems, but at some point you have to attack them with a complete toy human.
-- 
sentience@pobox.com         Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS             Typing in Dvorak         Programming with Patterns
Voting for Libertarians     Heading for Singularity  There Is A Better Way