"Stirling Westrup" <sti@cam.org> writes:
> I don't deny that. And if some mathematical 'handles' on this stuff
> were to be created, more work would be done on it.
You can do a lot even when the math is not clear, although you sleep
better at night when you have a strong theorem to rest on. I should
know, since I have been working with Bayesian confidence propagation
neural networks with incremental learning and hypercolumns - not even
the usual neural network analysis methods work well on them, so I have
to rely on experiments and on my professor's usually irritatingly
correct intuitions.
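For concreteness, the core of such a network can be sketched in a few
lines of Python (a much-simplified toy written for this post - the
class name, parameters and constants are mine, and the real networks
add plenty of detail on top): the weights are log probability ratios,
the probabilities are estimated incrementally with decaying averages,
and activity is normalized within each hypercolumn.

    import numpy as np

    class BCPNN:
        """Toy Bayesian confidence propagation net: weights are log
        probability ratios, estimated incrementally by decaying
        averages; activity is normalized within each hypercolumn."""

        def __init__(self, n_hc, units_per_hc, tau=0.01, eps=1e-4):
            self.hc, self.m = n_hc, units_per_hc
            n = n_hc * units_per_hc
            self.p_i = np.full(n, eps)        # running unit probabilities
            self.p_ij = np.full((n, n), eps)  # running pairwise probabilities
            self.tau = tau                    # incremental learning rate

        def learn(self, x):
            # One incremental step: exponentially decaying estimates
            # of P(x_i) and P(x_i, x_j) from a binary activity vector.
            self.p_i += self.tau * (x - self.p_i)
            self.p_ij += self.tau * (np.outer(x, x) - self.p_ij)

        def recall(self, x, steps=10):
            w = np.log(self.p_ij / np.outer(self.p_i, self.p_i))
            b = np.log(self.p_i)
            for _ in range(steps):
                s = (b + x @ w).reshape(self.hc, self.m)
                s -= s.max(axis=1, keepdims=True)   # numerical safety
                e = np.exp(s)
                # Softmax within each hypercolumn: one probability
                # distribution over the units of every hypercolumn.
                x = (e / e.sum(axis=1, keepdims=True)).ravel()
            return x

The per-hypercolumn softmax is what makes each hypercolumn behave as a
unit holding one discrete variable's posterior.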
At the top of my wish list would be some theorems about the emergence of
new attractor states in autoassociative networks seen as dynamical
systems - when they pop up, what kinds of bifurcations are there, and
how do they depend on learning.
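In a toy Hopfield-style autoassociative net (textbook material, not my
actual networks) the phenomenon is easy to see: every Hebbian storage
step digs a new energy minimum, i.e. a new attractor appears in the
dynamics, and the open theory question is exactly when and how such
minima emerge as learning proceeds.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 100, 5
    patterns = rng.choice([-1, 1], size=(k, n))

    # Hebbian (outer-product) storage: each stored pattern digs an
    # energy minimum, i.e. a new attractor state of the dynamics.
    w = (patterns.T @ patterns) / n
    np.fill_diagonal(w, 0)

    def settle(x, steps=5):
        # Asynchronous threshold updates never increase the energy
        # E(x) = -x'Wx/2, so the state slides into the nearest attractor.
        for _ in range(steps):
            for i in rng.permutation(n):
                x[i] = 1 if w[i] @ x >= 0 else -1
        return x

    noisy = patterns[0] * rng.choice([1, -1], n, p=[0.9, 0.1])
    print(np.mean(settle(noisy.copy()) == patterns[0]))  # ~1.0: recalled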
> I'm personally fairly down on the entire scheme of Neural Nets, due
> to the inability to certify a Neural Net solution as being stable or
> accurate (all you can do is statistics; proof is not possible), but
> some recent work (which I don't have a pointer to -- wish I did) at
> automatically converting between Neural Nets and Prolog-like
> languages holds out some hope not only that we will be able to
> verify that a Neural Net is gonna do what we hope, but that we will
> eventually gain the tools to analyze natural neural nets.
If you find any references to that work, I would be interested in a pointer.
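I have not seen it myself, but the general flavor of rule-to-network
translations (KBANN and its relatives) is easy to sketch - everything
below is my own illustration, not the work Stirling mentions. Each
Horn clause becomes a threshold unit, and because the translation is
invertible, one can hope to read rules back out of a trained net and
verify them:

    # A conjunctive rule 'c :- a, b.' becomes a unit that fires only
    # when all antecedents are on; a disjunction fires on any one.
    W = 1.0

    def and_unit(inputs):              # c :- a, b.
        return sum(inputs) * W > (len(inputs) - 0.5) * W

    def or_unit(inputs):               # c :- a.  c :- b.
        return sum(inputs) * W > 0.5 * W

    a, b = True, False
    print(and_unit([a, b]))   # False: the conjunction needs both
    print(or_unit([a, b]))    # True: one disjunct suffices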
Control isn't everything, but I wouldn't want to fly a neural-network-
controlled airplane unless the statistics were *very* good.
--
-----------------------------------------------------------------------
Anders Sandberg                                      Towards Ascension!
asa@nada.kth.se                            http://www.nada.kth.se/~asa/
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y