Re: Solomonoff vs Bayes

From: Francois-Rene Rideau (fare@tunes.org)
Date: Sun May 13 2001 - 15:18:32 MDT


On Sun, May 13, 2001 at 12:39:30PM -0700, Michael M. Butler wrote:
>> It is an induction principle
>> that neatly formalizes Moore's Law (commonly retro-stated as "pick the
>> simplest available explanation").
> Could you, perhaps, mean "Occam's (or Ockham's) Razor"?
Oops. Yes I did. Doh. Mea culpa.

Since I get an opportunity to complete my previous message,
note that Solomonoff's principle is an abstraction of actual learning
processes, one that purposefully neglects the cost of finding what is
"the simplest". More effective induction models would take such learning costs
into account. Also, in a more concrete model, asymptotic context-irrelevance
is not very interesting, unless it can be replaced by quantitative
non-asymptotic results. Reversible complexity measures (using reversible
algorithms rather than arbitrary ones) can provide better approximations
that take resource usage into account. When context is important, TAI
scientists have also developed notions of mutual complexity, common
knowledge, etc., to compare the informational content of contexts
relative to some common root. Anyway, what I mean to say is that it's a
fun field of science to explore, in which there are elegant mathematical
results that often prove useful to computer scientists or logicians,
that brings cybernetical insight, and has applications in computer learning.
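To make the "mutual complexity" idea a bit more concrete, here is a toy
sketch of my own (not anything from the TAI literature verbatim): true
Kolmogorov complexity is uncomputable, but compressed length is a
standard computable stand-in for it, and the normalized compression
distance built on top of it estimates how much information two strings
share. The choice of zlib as the compressor is arbitrary.

```python
import zlib

def c(data: bytes) -> int:
    # Crude stand-in for Kolmogorov complexity: the length of the
    # zlib-compressed string. (The real quantity is uncomputable;
    # any real-world compressor only gives an upper bound.)
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: roughly 0 when x and y share
    # almost all their information, roughly 1 when they share none.
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abab" * 200
b2 = b"abab" * 200
d = bytes(range(256)) * 3  # unrelated, less compressible data

print(ncd(a, b2))  # near 0: the two strings share their structure
print(ncd(a, d))   # much larger: little shared structure
```

Any real compressor only approximates the ideal measure from above, which
is exactly the kind of resource-bounded compromise mentioned earlier.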

Darn. When TUNES is reasonably working, I guess I ought to do more of TAI.

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
[ TUNES project for a Free Reflective Computing System | http://tunes.org ]
A computer is like an Old Testament god, with a lot of rules and no mercy.
        -- Joseph Campbell



This archive was generated by hypermail 2b30 : Mon May 28 2001 - 10:00:05 MDT