Many neural net learning algorithms aim to find ``simple'' nets
that explain the training data. The expectation is that the ``simpler''
the network, the better its generalization on test data
(Occam's razor).
Previous implementations, however,
use measures for ``simplicity'' that lack the power,
universality and elegance of those based on Kolmogorov complexity
and Solomonoff's algorithmic probability.
Likewise, most previous approaches
(especially those of the ``Bayesian'' kind)
suffer from the problem of choosing appropriate priors.
This paper addresses both issues.
It first reviews some basic concepts of algorithmic
complexity theory relevant to machine learning, and
how the Solomonoff-Levin distribution (or universal
prior) deals with the prior problem. The universal prior leads to
a probabilistic method for finding ``algorithmically
simple'' problem solutions with high generalization capability.
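In standard notation (added here for the reader's convenience), the Solomonoff-Levin distribution assigns to every string $x$ a prior proportional to the summed weights of the programs computing it on a universal Turing machine $U$:
\[
P_U(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-l(p)},
\]
where $l(p)$ denotes the length of the (self-delimiting) program $p$. Outputs of short programs thus receive exponentially more prior mass, which is what makes this prior a natural formalization of Occam's razor.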
The method is based on Levin complexity (a time-bounded generalization of
Kolmogorov complexity) and inspired by Levin's optimal
universal search algorithm.
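For reference, Levin complexity (often written $Kt$) trades description length against computation time; in standard notation,
\[
Kt(x) \;=\; \min_{p} \bigl\{\, l(p) + \log t(p) \;:\; U(p)=x \text{ within } t(p) \text{ steps} \,\bigr\},
\]
so a solution counts as ``simple'' if some short program computes it quickly, and Levin search enumerates programs in essentially this order.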
For a given problem, solution candidates are computed by
efficient ``self-sizing'' programs
that influence their own runtime and storage size.
The probabilistic search algorithm finds
the ``good'' programs (the ones quickly computing
algorithmically probable solutions fitting the training data).
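The search principle can be illustrated by a deliberately tiny sketch (the two-bit instruction set, the bitstring program encoding, and the `fits` predicate below are illustrative assumptions, not the paper's actual self-sizing programs): in phase $i$, every program $p$ with $l(p) \le i$ is run for $2^{i-l(p)}$ steps, so short, fast programs fitting the data are found first.

```python
import itertools

# Toy two-bit instruction set acting on an accumulator (an assumption
# for illustration; the paper's programs are richer and "self-sizing"):
#   00: acc += 1    01: acc *= 2    10: emit acc    11: halt
def run(program, max_steps):
    acc, out, steps = 0, [], 0
    for i in range(0, len(program) - 1, 2):
        if steps >= max_steps:
            return None                      # time budget exhausted
        steps += 1
        op = program[i:i + 2]
        if op == "00":
            acc += 1
        elif op == "01":
            acc *= 2
        elif op == "10":
            out.append(acc)
        else:                                # "11": halt
            break
    return out

def levin_search(fits, max_phase=16):
    """In phase i, each program p with len(p) <= i gets 2**(i - len(p))
    steps: shorter programs get exponentially more time, mirroring the
    2**(-l(p)) weighting of the universal prior."""
    for phase in range(1, max_phase + 1):
        for length in range(2, phase + 1, 2):
            budget = 2 ** (phase - length)
            for bits in itertools.product("01", repeat=length):
                program = "".join(bits)
                out = run(program, budget)
                if out is not None and fits(out):
                    return program           # first hit is short and fast
    return None

target = [1, 2, 4]                           # the "training data" to fit
p = levin_search(lambda out: out == target)
```

The sketch returns a shortest-and-fastest bitstring program emitting the target sequence; the total work per phase is roughly uniform across program lengths, which is the key to Levin search's optimality (up to a constant factor).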
Simulations focus on the task of discovering
``algorithmically simple'' neural networks with low Kolmogorov
complexity and high generalization
capability.
It is demonstrated that the method, at least on certain
toy problems where it is computationally feasible, can lead to
generalization results unmatched by
previous neural net algorithms.
Much remains to be done, however, to make large-scale applications
and ``incremental learning'' feasible.