LEARNING COMPLEX, EXTENDED SEQUENCES USING THE PRINCIPLE
OF HISTORY COMPRESSION
(Neural Computation, 4(2):234-242, 1992)
Previous neural network learning algorithms for sequence processing
are computationally expensive and perform poorly
when it comes to long time lags.
This paper first introduces a simple principle for
reducing the descriptions of event sequences without loss of information.
A consequence of this
principle is that only unexpected
inputs can be relevant.
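
To make the principle concrete, here is a minimal sketch in Python (a toy illustration, not the paper's algorithm; the function names and the deterministic predict(history) interface are assumptions of mine). Given a deterministic predictor of the next event, a sequence can be stored without loss of information by keeping only the events the predictor gets wrong, together with their time stamps; every correctly predicted event can be regenerated by rerunning the predictor.

    def compress(sequence, predict):
        # Keep only unexpected events; predictable ones are redundant.
        residual, history = [], []
        for t, event in enumerate(sequence):
            if predict(history) != event:      # prediction failed: store it
                residual.append((t, event))
            history.append(event)
        return residual

    def decompress(length, residual, predict):
        # Rebuild the full sequence from the residual alone, filling in
        # the expected events with the same deterministic predictor.
        stored, history = dict(residual), []
        for t in range(length):
            history.append(stored.get(t, predict(history)))
        return history

For example, with a predictor that always guesses a repeat of the previous event, the sequence 'aaaabaaa' compresses to its three surprising positions: (0, 'a'), (4, 'b'), and (5, 'a').
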
This insight leads to the construction of neural architectures that learn to
`divide and conquer' by recursively decomposing sequences.
I describe two architectures.
The first functions as a self-organizing
multi-level hierarchy of recurrent networks. The second,
involving only two recurrent networks,
tries to collapse a multi-level
predictor hierarchy into a single recurrent net.
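
The following Python sketch illustrates the control flow of the first architecture at two levels (illustrative only: MarkovPredictor is a trivial frequency-table stand-in for the paper's recurrent predictors, and all names here are mine). The low-level predictor sees and trains on every event; only the events it fails to predict, paired with their time stamps, are passed upward, so the higher level operates on a much shorter, compressed sequence.

    class MarkovPredictor:
        # Order-1 frequency table standing in for a recurrent net.
        def __init__(self):
            self.counts, self.last = {}, None

        def predict(self):
            options = self.counts.get(self.last, {})
            return max(options, key=options.get) if options else None

        def learn(self, event):
            bucket = self.counts.setdefault(self.last, {})
            bucket[event] = bucket.get(event, 0) + 1
            self.last = event

    def two_level_pass(sequence, low, high):
        # The low level trains on every event; only its prediction
        # failures (with time stamps) reach the higher level.
        unexpected = []
        for t, event in enumerate(sequence):
            if low.predict() != event:
                high.learn((t, event))   # the higher level's shorter input
                unexpected.append((t, event))
            low.learn(event)
        return unexpected

Fed a repetitive stream such as 'aaab' * 100, the low level quickly absorbs the within-chunk regularities, and after a brief warm-up only the 'b' boundary events reach the higher level.
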
Experiments show that the system can require less computation
per time step
and many fewer training sequences than
conventional training algorithms for recurrent nets.