
DISCOVERING NEURAL NETS WITH
LOW KOLMOGOROV COMPLEXITY AND
HIGH GENERALIZATION CAPABILITY
NEURAL NETWORKS 10(5):857-873, 1997

Jürgen Schmidhuber
IDSIA, Lugano, Switzerland
juergen@idsia.ch http://www.idsia.ch/~juergen

Abstract:

Many neural net learning algorithms aim at finding ``simple'' nets to explain training data. The expectation is: the ``simpler'' the networks, the better the generalization on test data ($\rightarrow$ Occam's razor). Previous implementations, however, use measures of ``simplicity'' that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the ``Bayesian'' kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding ``algorithmically simple'' problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. For a given problem, solution candidates are computed by efficient ``self-sizing'' programs that influence their own runtime and storage size. The probabilistic search algorithm finds the ``good'' programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering ``algorithmically simple'' neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least on certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains to be done, however, to make large-scale applications and ``incremental learning'' feasible.
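For reference, a sketch of the two standard definitions behind this terminology (the notation here, $U$ for a universal prefix Turing machine, $\ell(p)$ for the length of program $p$, and $t(p)$ for its runtime, is this sketch's, not the paper's). The Solomonoff-Levin distribution (universal prior) of a string $x$ sums the probabilities of all programs computing it:

$$P_U(x) = \sum_{p \,:\, U(p)=x} 2^{-\ell(p)},$$

while Levin complexity trades program length against the logarithm of runtime:

$$Kt(x) = \min_{p} \left\{\, \ell(p) + \log_2 t(p) \;:\; U(p) = x \text{ within } t(p) \text{ steps} \,\right\}.$$

Short, fast programs thus dominate both the prior and the complexity measure, which is the formal sense in which the method favors ``algorithmically simple'' solutions.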

Keywords: Kolmogorov complexity, Levin complexity, Solomonoff-Levin distribution, generalization, universal search, self-sizing programs, neural networks.
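To make the search scheme concrete, here is a minimal Python sketch of Levin's universal search, the deterministic algorithm that inspires the paper's probabilistic method (the method itself searches over ``self-sizing'' programs; this sketch shows only the underlying time-allocation idea). The helpers run(program, steps) and is_solution(output) are hypothetical stand-ins for a concrete program interpreter and a fit-to-training-data test.

import itertools

def levin_search(run, is_solution, max_phase=20):
    # Minimal sketch of Levin's universal search over bit-string programs.
    # run(program, steps): execute `program` for at most `steps` interpreter
    #   steps; return its output, or None if it did not halt in time.
    # is_solution(output): test the output against the training data.
    # Both are hypothetical stand-ins, not part of the paper's code.
    #
    # In phase i, every program p with len(p) <= i gets a time budget of
    # 2**(i - len(p)) steps, so each length class costs about 2**i steps
    # per phase, and short programs are tried first and run longest.
    for i in range(1, max_phase + 1):
        for length in range(1, i + 1):
            budget = 2 ** (i - length)
            for bits in itertools.product("01", repeat=length):
                program = "".join(bits)
                output = run(program, budget)
                if output is not None and is_solution(output):
                    return program  # first short, fast program that fits
    return None

Because the per-program budget doubles with each phase, a program of length l that halts in t steps is found no later than phase l + log2(t); this is the source of the constant-factor optimality of universal search that the paper builds on.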





