
Introduction

Various algorithms for supervised learning in recurrent non-equilibrium networks with non-stationary inputs and outputs have been proposed [Robinson and Fallside, 1987], [Williams and Zipser, 1988], [Pearlmutter, 1988], [Gherrity, 1989], [Rohwer, 1989]. Apart from the fact that these algorithms require explicit teaching signals for the output units, there is a second reason why they are biologically implausible: they depend on global computations.

What distinguishes local from global computations in the context of neural networks? We distinguish two kinds of local computation in systems consisting of a large number of connected units:

`Local in space' means that changes to a unit's weight vector depend solely on activation information from the unit itself and from the units connected to it. The update complexity for a unit's weight vector at a given time should be proportional only to the dimensionality of that weight vector. For a completely recurrent network this implies a weight update complexity of $O(n^{2})$ per time step, where $n$ is the number of units.
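
To make the spatial-locality constraint concrete, consider the following sketch (in Python with NumPy; the error-like signal post_error and the learning rate are illustrative assumptions, not part of the method proposed here). Each unit updates only its own weight vector, using only its own signal and the activations of the units feeding into it:

    import numpy as np

    def local_weight_update(W, x, post_error, lr=0.01):
        # W[j] is the weight vector of unit j; x holds the current
        # activations of all n units; post_error[j] is an error-like
        # signal assumed to be available locally at unit j.
        # Each unit touches only its own weight row and the activations
        # of its presynaptic units: O(n) work per unit, hence O(n^2)
        # per step for a completely recurrent network of n units.
        for j in range(W.shape[0]):
            W[j] += lr * post_error[j] * x
        return W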

`Local in time' means that weight changes take place continually and depend only on information about units and weights from a fixed recent time interval. This contrasts with weight changes that take place only after externally defined episode boundaries, which require additional a priori knowledge and in some cases cause high peaks of computation time. The expression `local in time' corresponds to the notion of `on-line' learning.
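
A corresponding sketch of temporal locality (again with hypothetical names; net.step, net.W, and the update rule are assumptions for illustration, not this paper's algorithm): weights change at every time step, using only a fixed-length buffer of recent activations, with no externally defined episode boundaries:

    from collections import deque
    import numpy as np

    WINDOW = 5  # length of the fixed recent time interval (an assumption)

    def run_online(net, stream, lr=0.01):
        # On-line ('local in time') training loop. `net` is assumed to
        # expose step(inp) -> activation vector and a weight matrix net.W;
        # `stream` yields (input, local_signal) pairs. The update at each
        # step uses only the last WINDOW activations, so per-step cost and
        # memory do not grow with sequence length, and no externally
        # defined episode boundaries are required.
        recent = deque(maxlen=WINDOW)
        for inp, local_signal in stream:
            recent.append(net.step(inp))
            # weights change continually, at every time step, from
            # information inside the fixed recent interval only
            net.W += lr * np.outer(local_signal, recent[0])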

As far as we can judge today, biological systems use completely local computations to accomplish complex spatio-temporal credit assignment tasks. However, the local learning rules proposed so far (like Hebb's rule) make sense only if there are no `hidden units'.
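
Hebb's rule itself illustrates full locality; a one-line sketch in the same notation (the standard form of the rule, not specific to this paper):

    import numpy as np

    def hebb_update(W, pre, post, lr=0.01):
        # Plain Hebbian rule: delta W[j][i] = lr * post[j] * pre[i].
        # Local in space (only pre- and postsynaptic activity is used)
        # and local in time (only the current step matters), but it
        # offers no way to assign credit to hidden units.
        return W + lr * np.outer(post, pre)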

In this paper (which is based on [Schmidhuber, 1989]) we demonstrate, by way of a constructive example, that local credit assignment with `hidden units' is not a contradiction in terms: we propose a method, local in both space and time, that is designed to deal with `hidden units' and with units whose past activations are `hidden in time'.

