The following methods for supervised sequence learning have been proposed:
simple recurrent nets [6], [2];
time-delay nets (e.g. [1]);
sequential recursive auto-associative memories [13];
back-propagation through time (BPTT) [16], [24], [26];
Mozer's `focused back-prop' algorithm [9];
the IID or RTRL algorithm [14], [27], and its recent improvement [20];
the recent fast-weight algorithm [22];
higher-order networks [4];
as well as continuous-time methods equivalent to some of the above [11], [12], [3].
The following methods for sequence learning by reinforcement learning have been proposed:
extended REINFORCE algorithms [25],
the neural bucket brigade algorithm [17],
and recurrent networks adjusted by adaptive critics [18] (see also [7]).
Common to all of these approaches is that they do not try to focus selectively on relevant inputs; they waste efficiency and resources by attending to every input.
With many applications, a second drawback of these methods is the following:
the longer the time lag between an event and the occurrence of a related error,
the less information is carried by the error signal wandering `back into time'
(see [5] for a more detailed analysis).
[10] and [15] have addressed the latter problem but not the former.
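The decay of error information over long time lags can be illustrated with a minimal sketch (a hypothetical scalar example, not taken from the cited analysis): in a one-unit recurrent net h_t = tanh(w * h_{t-1}), the error signal flowing back over T steps is scaled by a product of per-step factors w * tanh'(.), which shrinks geometrically whenever each factor has magnitude below one.

```python
import math

def backward_scale(w, h0, T):
    """Return |d h_T / d h_0| for the scalar recurrence h_t = tanh(w * h_{t-1}).

    By the chain rule this is the product over t of |w * tanh'(w * h_{t-1})|,
    i.e. the factor by which an error signal shrinks while travelling
    T steps `back into time'.
    """
    h, scale = h0, 1.0
    for _ in range(T):
        pre = w * h
        h = math.tanh(pre)
        # tanh'(x) = 1 - tanh(x)^2; one backward chain-rule factor per step
        scale *= abs(w) * (1.0 - math.tanh(pre) ** 2)
    return scale

# The backward scale factor decays roughly geometrically with the time lag T:
for T in (1, 5, 20):
    print(T, backward_scale(0.9, 0.5, T))
```

Since each per-step factor here is below one, the gradient with respect to an event 20 steps in the past is far smaller than for a recent event, which is the sense in which long time lags carry little usable error information.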
Juergen Schmidhuber
2003-02-25