
HISTORY COMPRESSION

Consider a deterministic discrete-time predictor (not necessarily a neural network) whose state at time $t$ is described by an environmental input vector $i(t)$, an internal state vector $h(t)$, and an output vector $o(t)$. The environment may be non-deterministic. At time $0$, the predictor starts with $i(0)$ and an internal start state $h(0)$. At time $t \geq 0$, the predictor computes

\begin{displaymath}o(t)= f ( i(t), h(t)). \end{displaymath}

At time $t>0$, the predictor furthermore computes

\begin{displaymath}h(t) = g ( i(t-1), h(t-1)). \end{displaymath}
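
As a concrete illustration (not part of the original formulation), the following Python sketch steps such a predictor through an input sequence. The names f, g, i_seq, and h0 are placeholders that would have to be supplied.

\begin{verbatim}
def run_predictor(f, g, i_seq, h0):
    """Step a deterministic predictor through the input sequence i_seq.

    f(i(t), h(t))     -> o(t), the prediction of the next input i(t+1)
    g(i(t-1), h(t-1)) -> h(t), the new internal state
    Returns the predictions o(0), ..., o(T-1).
    """
    h = h0                             # internal start state h(0)
    predictions = []
    for t, i_t in enumerate(i_seq):
        if t > 0:
            h = g(i_seq[t - 1], h)     # h(t) = g(i(t-1), h(t-1))
        predictions.append(f(i_t, h))  # o(t) = f(i(t), h(t))
    return predictions
\end{verbatim}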

All information about the input at a given time $t_x$ can be reconstructed from knowledge of

\begin{displaymath}
t_x,\; f,\; g,\; i(0),\; h(0), \;\mbox{and the pairs}\;
(t_s, i(t_s)) \;\mbox{for which}\; 0 < t_s \leq t_x \;\mbox{and}\; o(t_s - 1) \neq i(t_s).
\end{displaymath}

This is because whenever $o(t)=i(t+1)$ at a given time $t$, the predictor can predict the next input from the previous ones; the new input is then derivable by means of $f$ and $g$.

Information about the observed input sequence can be compressed even further, beyond storing just the unpredicted input vectors $i(t_s)$: it suffices to know only those elements of the vectors $i(t_s)$ that were not correctly predicted.

This observation implies that we can discriminate one sequence from another by knowing just the unpredicted inputs and the corresponding time steps at which they occurred. No information is lost if we ignore the expected inputs. We do not even have to know $f$ and $g$. We call this the principle of history compression.
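
As a hypothetical illustration of this principle (using the same placeholder f, g, and h0 as in the sketch above, and exact vector equality as the prediction test), the following Python sketch compresses a sequence down to its unpredicted pairs $(t_s, i(t_s))$ and reconstructs the full sequence from those pairs together with $f$, $g$, $i(0)$, and $h(0)$. For simplicity it stores whole mispredicted vectors rather than only their mispredicted elements.

\begin{verbatim}
import numpy as np

def compress(f, g, i_seq, h0):
    """Keep only the pairs (t_s, i(t_s)) with o(t_s - 1) != i(t_s)."""
    h, unpredicted = h0, []
    for t in range(1, len(i_seq)):
        o_prev = f(i_seq[t - 1], h)          # o(t-1), the prediction of i(t)
        if not np.array_equal(o_prev, i_seq[t]):
            unpredicted.append((t, i_seq[t]))
        h = g(i_seq[t - 1], h)               # h(t)
    return unpredicted

def reconstruct(f, g, i0, h0, unpredicted, length):
    """Rebuild i(0), ..., i(length-1) from i(0), h(0), f, g and the pairs."""
    stored = dict(unpredicted)
    i_seq, h = [i0], h0
    for t in range(1, length):
        o_prev = f(i_seq[t - 1], h)          # predicted next input
        i_seq.append(stored.get(t, o_prev))  # stored value only where prediction failed
        h = g(i_seq[t - 1], h)
    return i_seq
\end{verbatim}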

From a theoretical point of view it is important to know at which time an unexpected input occurs; otherwise there is a potential for ambiguity: two different input sequences may lead to the same shorter sequence of unpredicted inputs. For many practical tasks, however, there is no need to know the critical time steps, as I show later.

