Matt Gingell wrote:
>
> >See http://pobox.com/~sentience/AI_design.temp.html [343K]
> >
> I skimmed your document and, with all due respect, I do not see that your model,
> as I understand it, differs significantly from classical AI. You have a number
> of modules containing domain-specific knowledge mediated by a centralized world
> model. This is a traditional paradigm.
> The macro-scale self-improvement you
> envision is not compelling to me: if you've written a program that can
> understand and improve upon itself in a novel and open-ended way, then you've
> solved the interesting part of the problem already.
Precisely. That, in a nutshell, is the entire problem of seed AI: "Write a program that can represent, notice, understand, and improve on its local component code and global design paradigms in an open-ended way."
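The represent/notice/improve loop above can be caricatured in miniature. The sketch below is a toy illustration only, not seed AI: it holds one of its own components as source text, "notices" a weakness by matching a single hard-coded pattern, and emits a better version of that component. All names here (SLOW_SUM_SRC, load, improve) are invented for the example; real open-endedness is exactly what this toy lacks.

```python
import ast

# "Represent": the program holds one of its own components as data.
SLOW_SUM_SRC = """
def slow_sum(n):
    total = 0
    for i in range(n + 1):
        total += i
    return total
"""

def load(src):
    """Compile a component's source and return the function it defines."""
    name = ast.parse(src).body[0].name
    ns = {}
    exec(compile(src, "<component>", "exec"), ns)
    return ns[name]

def improve(src):
    """'Notice': detect the naive summation loop in the component's AST.
    'Improve': replace it with the closed-form (Gauss) implementation.
    This improver understands exactly one pattern -- hence only a toy."""
    fn = ast.parse(src).body[0]
    if any(isinstance(node, ast.For) for node in ast.walk(fn)):
        return f"def {fn.name}(n):\n    return n * (n + 1) // 2\n"
    return src  # nothing this micro-improver understands

slow_sum = load(SLOW_SUM_SRC)
fast_sum = load(improve(SLOW_SUM_SRC))
assert slow_sum(100) == fast_sum(100) == 5050
```

The gap between this and seed AI is the point of the paragraph above: here the "understanding" is a single fixed pattern-match, whereas the problem statement demands improvement that is open-ended over both component code and the design paradigms themselves.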
> Could you identify the cardboard box you think AI research is stuck in, and
> what you'd change if you were in charge? (You have 5 years...)
I conceptualize "AI" and "transhuman AI" as two entirely different fields. AI is stuck in the cardboard box of insisting on generalized processing, on simplified models, and on implementing one piece of one problem at a time instead of entire cognitive architectures; researchers are stuck on systems that a small team can implement in a year.
--
sentience@pobox.com       Eliezer S. Yudkowsky
http://pobox.com/~sentience/tmol-faq/meaningoflife.html
Running on BeOS           Typing in Dvorak          Programming with Patterns
Voting for Libertarians   Heading for Singularity   There Is A Better Way