... Schmidhuber^1

^1 Current address: Dept. of Computer Science, University of Colorado, Campus Box 430, Boulder, CO 80309, USA, yirgan@cs.colorado.edu
... inputs^2

^2 Recently I became aware that Don Mathis had some related ideas (personal communication). A hierarchical approach to sequence generation was pursued by [Miyata, 1988].
... well^3

^3 For instance, we might employ the more limited feedforward networks with a `time window' approach. In this case, the number of previous inputs considered as a basis for the next prediction remains fixed.
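As a rough, hypothetical illustration (not taken from the paper: the window size W, the toy sine sequence, and the linear least-squares predictor standing in for a small feedforward network are all my assumptions), such a fixed time window might look like this:

```python
# Minimal sketch of the `time window' idea: a feedforward predictor
# sees only the last W inputs, so any dependency reaching farther back
# than W steps is invisible to it by construction.
import numpy as np

W = 3  # fixed window size (hypothetical choice)
rng = np.random.default_rng(0)

# Toy sequence of scalar inputs.
seq = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)

# Each training pair: the W most recent inputs -> the next input.
X = np.stack([seq[t - W:t] for t in range(W, len(seq))])
y = seq[W:]

# Linear predictor fit by least squares, standing in for a small
# feedforward network trained on the same windows.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
print("mean squared prediction error:", np.mean((pred - y) ** 2))
```

Whatever the learning machinery, a regularity spanning more than W steps cannot be captured, which is why this approach is more limited than the recurrent one.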
... step^4

^4 A unique time representation is theoretically necessary to provide $P_{s+1}$ with unambiguous information about when the failure occurred (see also the last paragraph of section 2). A unique representation of the time elapsed since the last unpredicted input occurred will do as well.
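To make this concrete, here is a minimal sketch (the function name and the always-constant toy predictor are hypothetical stand-ins, not the paper's implementation): each input that the lower-level predictor $P_s$ fails to predict is paired with a count of the steps elapsed since the previous failure, so identical inputs occurring after different delays remain distinguishable for $P_{s+1}$.

```python
# Sketch of a unique time representation: pair each unpredicted input
# with the number of steps since the previous prediction failure.
def higher_level_stream(inputs, predict):
    """Yield (unexpected input, steps since last failure) pairs.

    `predict` stands in for the lower-level predictor P_s: it maps
    the history seen so far to a guess of the next input.
    """
    events = []
    steps_since_failure = 0
    for t, x in enumerate(inputs):
        steps_since_failure += 1
        if predict(inputs[:t]) != x:      # prediction failure
            events.append((x, steps_since_failure))
            steps_since_failure = 0       # restart the time code
    return events

# Toy usage with a predictor that always guesses 'a':
seq = list("aabaaacaa")
print(higher_level_stream(seq, lambda history: "a"))
# -> [('b', 3), ('c', 4)]: each failure carries an unambiguous delay
```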
... description^5

^5 In contrast, the reduced descriptions referred to by [Mozer, 1990] are not unambiguous.