
A `SELF-REFERENTIAL' WEIGHT MATRIX

J. Schmidhuber, TUM

ABSTRACT. Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is `yes'. I derive an initial gradient-based sequence learning algorithm for a `self-referential' recurrent network that can `speak' about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first `introspective' neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step, which is independent of the sequence length and equals $O(n_{\rm conn} \log n_{\rm conn})$, where $n_{\rm conn}$ is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient `introspective' or `self-referential' weight change algorithm, but to show that such algorithms are possible at all.
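To make the architectural idea concrete, the following is a minimal, hypothetical Python/NumPy sketch of a recurrent net that observes its previous error through a dedicated input unit and uses extra output units (a soft address over its connections plus a scalar delta) to modify its own weight matrix at every time step. The unit counts, the soft-addressing scheme, and the update rule are illustrative assumptions only, not the paper's construction; in particular, the paper's gradient-based algorithm for learning the initial weight matrix is not shown.

import numpy as np

# Conceptual sketch (not the paper's algorithm): a recurrent net whose
# dedicated output units address and modify entries of its own weight
# matrix, while a dedicated input unit feeds back the observed error.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 8, 1          # task input / hidden / task output units
n_units = n_in + 1 + n_hidden + n_out    # +1 input unit that observes the error
W = 0.1 * rng.standard_normal((n_units, n_units))   # fully recurrent weight matrix
n_conn = W.size

# "Introspective" read-out: a soft address over all connections plus a delta.
addr_W = 0.1 * rng.standard_normal((n_units, n_conn))
delta_W = 0.1 * rng.standard_normal(n_units)

def step(act, x, prev_err, W):
    """One time step: update activations, then let the net edit its weights."""
    inp = np.zeros_like(act)
    inp[:n_in] = x
    inp[n_in] = prev_err                 # error-observing input unit
    act = np.tanh(W @ act + inp)         # recurrent dynamics

    # Self-modification: a softmax over the flattened connections picks a
    # (soft) address, a scalar unit proposes the change.
    addr = np.exp(addr_W.T @ act)
    addr /= addr.sum()
    delta = np.tanh(delta_W @ act)
    W = W + 0.01 * delta * addr.reshape(W.shape)
    return act, W

# Toy run: the target at each step is the sum of the current inputs.
act = np.zeros(n_units)
prev_err = 0.0
for t in range(20):
    x = rng.standard_normal(n_in)
    act, W = step(act, x, prev_err, W)
    y = act[-n_out:]                     # task output unit(s)
    prev_err = float(x.sum() - y[0])     # error fed back at the next step
    print(f"t={t:2d}  error={prev_err:+.3f}")

In this sketch the weight changes are driven only by the net's own activations, so the matrices addr_W and delta_W play the role of the weight-analyzing and weight-modifying machinery the abstract describes; in the paper, that machinery is itself part of the single weight matrix being modified.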



