>How true is the old adage that a computer can only be as clever as the
>programmer who designed it?
That's bogus. If the software I write doesn't end up smarter than I am,
I'll be in trouble come salary review time. Not smarter in general, of
course-- but smarter at the tasks people will buy it for.
>How far has self-learning computing
>progressed?
It's charging right along. There are plenty of people out there making
money from machines learning how to do stuff, and plenty more people in
universities and research labs coming up with new methods. I used to be in
the second category, and am now trying to be in the first. There seems to
be about a ten-year gap between development and commercial deployment,
because that's how long it takes to know if a technique isn't a loser.
We're still a long way from 'general-purpose intelligence'. The present
state of the commercial art is to point a program at a database of a bunch
of stuff that's happened in the past, and then ask it what's the best thing
to do in a new situation. For example, you might have a database of loan
applications, and whether those people later defaulted on their loans, and
you can ask the program if a given application is likely to be a good one or
a bad one. That's the level of boringness of typical uses of machine learning.
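To make that concrete, here's a rough sketch in Python of the loan example.
The field names and numbers are made up, and a real system would use something
fancier than a nearest-neighbour lookup, but the shape is the same: a database
of past cases with known outcomes, and a query about a new case.

    # Past applications: (income in $k, loan amount in $k, years employed),
    # plus whether the applicant later defaulted.
    past = [
        ((45, 10, 1), "defaulted"),
        ((80, 15, 6), "repaid"),
        ((30, 20, 0), "defaulted"),
        ((60, 5, 3), "repaid"),
        ((25, 18, 1), "defaulted"),
        ((90, 25, 10), "repaid"),
    ]

    def distance(a, b):
        # Plain Euclidean distance between two applications.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def predict(new_application):
        # "Point the program at the database, then ask about a new situation":
        # report the outcome of the most similar past application.
        closest = min(past, key=lambda record: distance(record[0], new_application))
        return closest[1]

    # Ask about a fresh application; prints the program's best guess.
    print(predict((55, 12, 4)))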
--CarlF