>
> > It's a fact known to anyone who's done practical AI work that a more
> > specialized approach to a given problem is going to be more efficient
> > than a more general approach, in almost all cases. Building a real AI
> > thus requires a very delicate balance between generality and
> > specialization. One needs, in fact, a general intelligence mechanism
> > that can also serve as a sort of "mind OS" on which numerous
> > specialized intelligence mechanisms can run. But you've heard this
> > spiel from me before...
>
> It is really an application of the bias-variance dilemma in learning
> theory: a learning system makes errors both from its inherent bias and
> from variance due to insufficient training. A system with no bias will
> have large variance and require a lot of training to become useful, but
> a carefully chosen bias (= prior knowledge of the domain) can reduce
> the variance quite a bit. The price, of course, is that the system is
> now biased toward a certain domain and toward certain kinds of mistakes.
Yes, you're saying much the same thing as me. But to be accurately applied
to real AI problems, learning theory has to take into account both space
and time complexity, and to be gauged in terms of average-case
performance, etc. etc. Not many of the known formal theorems actually
apply under these realistic assumptions, but the underlying concepts still
pertain.
ben
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:03 MDT