Recurrent neural networks (RNNs)
are currently experiencing a second wave
of attention.
The enthusiasm of the 1980s and early 90s was fueled by the obvious
theoretical advantages of RNNs: unlike feedforward neural networks (FNNs)
and SVMs, RNNs have an internal state which is essential for many temporal
processing tasks. And unlike the hidden states of HMMs, these internal
states can take on both discrete and continuous values.
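That internal state can be made concrete with a minimal sketch, assuming a vanilla (Elman-style) RNN cell; the weight names below are illustrative, not from any particular system:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the new hidden state depends on the
    current input AND the previous state, so h carries information
    forward in time -- the capability an FNN or SVM lacks."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W_xh = rng.standard_normal((n_in, n_hid)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((n_hid, n_hid)) * 0.1  # recurrent weights
b_h = np.zeros(n_hid)

h = np.zeros(n_hid)  # the internal state; a feedforward net has no analogue
for x_t in rng.standard_normal((10, n_in)):  # a length-10 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)    # h now reflects the whole history
```

Because `h` is fed back at every step, the final state is a (continuous-valued) summary of the entire sequence, in contrast to an HMM's discrete state or an FNN's fixed-window input.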
Practitioners, however, had sobering experiences when they tried to apply
RNNs to speech recognition, robot control, and other important problems
that require sequential processing of information. The first RNNs simply
did not work very well, and their functioning was poorly understood,
since it is inherently more complex than that of FNNs. The latter
neatly fit into the framework of traditional statistics and information
theory, while the analysis of RNNs requires additional insights, e.g., from
theoretical computer science and algorithmic information theory.
Recent progress, however, has overcome major drawbacks of traditional
RNNs. This progress has come in the form of new architectures, learning
algorithms (including gradient-based methods, reinforcement learning,
and evolutionary algorithms),
and also in a better understanding of RNN behavior, which is necessary to
improve and apply RNNs. The new RNNs can learn to solve many previously
unlearnable tasks, including control in partially observable
environments, processing of symbolic data, music improvisation and
composition, and aspects of speech recognition.
The most recent NIPS RNN workshop took place back in 1995 (Neural Networks
for Signal Processing). Now, 8 years later, numerous new developments warrant
another one.
RNN optimists are claiming that we are at the beginning of
an RNNaissance, and that soon we will see more and more applications of the
new RNNs. The pessimists are claiming otherwise. We expect a lively
discussion between optimists and pessimists.
The workshop will start with a brief tutorial and provide a forum for
discussing results and problems. We hope to examine the most promising
future directions, the most important open issues, and new perspectives.
Much time will be devoted to open discussion.
The workshop targets researchers interested in adaptive sequence
processing and control in partially observable environments.
Submit a short paper or extended abstract to Bram Bakker
(bram@idsia.ch).
Tentative Schedule on Friday Dec 12 2003
Morning sessions: 7:30am-10:30am
- 7:30am Opening remarks and introductory tutorial - B. Bakker
- 8:00am coffee break
8:15am Session 1: Prediction and modeling
- 8:15am Time-warped hierarchical structure in music and speech:
A sequence prediction challenge - D. Eck
(abstract)
- 8:35am Recurrent neural networks - a focus on architectures
- H.G. Zimmermann
(abstract)
- 8:55am Use of input-driven Hidden Markov Models with application to
multi-site precipitation - S. Kirshner
(abstract)
- 9:15am Recurrent nets discover new motifs for protein classification -
S. Hochreiter
(abstract)
- 9:35am coffee break
9:50am Session 2a: Control
- 9:50am LSTM RNNs for model-free value function-based reinforcement
learning in POMDPs - B. Bakker
(abstract)
- 10:10am Self-organization in a mirror neurons model using RNN
robot experiments and their analysis - J. Tani
(abstract)
Afternoon sessions: 4:00pm-7:00pm
4:00pm Session 2b: Control (continued)
- 4:00pm Recurrent neural networks from learning attractor dynamics -
S. Schaal
(abstract)
- 4:20pm Recurrent networks in engineering - P. Werbos
(abstract)
- 4:40pm coffee break
4:55pm Session 3: Learning algorithms and analysis
- 4:55pm Non-gradient approaches to training recurrent
neural networks - S. Kremer
(abstract)
- 5:15pm Incremental learning for RNNs: How does it affect performance and
hidden unit activation? - S. Chalup
(abstract)
- 5:35pm Recurrent/recursive networks as non-autonomous dynamical systems -
lessons learnt - P. Tino
(abstract)
- 5:55pm Why reinforcement learning requires recurrence: Examples from
finance and competitive games - J. Moody
(paper)
- 6:15pm coffee break
- 6:25pm Panel discussion: Open issues and future directions in RNN research -
B. Bakker (moderator), D. Eck, S. Hochreiter, J. Moody, S. Kremer, J. Tani,
P. Tino, P. Werbos, H.G. Zimmermann