Soon after the birth of modern computer science in the 1930s, two fundamental questions arose:
1. How can computers learn useful programs from experience, as opposed to being explicitly programmed by humans?
2. How can we program parallel multiprocessor machines, as opposed to traditional serial architectures?
Both questions found natural answers in the field of Recurrent Neural Networks (RNNs), which are brain-inspired general purpose computers that can learn parallel-sequential programs or algorithms encoded as weight matrices.
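The "parallel-sequential program" view can be made concrete with a minimal sketch of a vanilla (Elman-style) RNN step. This is an illustrative toy, not any specific system discussed at the symposium; all names and sizes are made up for the example. The learned "program" is simply the set of weight matrices.

```python
import numpy as np

# Minimal sketch, assuming a vanilla RNN: the learned "program" is the
# weight matrices (W_x, W_h) and bias b; sizes here are arbitrary.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_x = 0.1 * rng.standard_normal((n_hidden, n_in))      # input-to-hidden weights
W_h = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # recurrent hidden-to-hidden weights
b = np.zeros(n_hidden)

def step(h, x):
    # All hidden units update in parallel at each step,
    # while the input sequence is consumed serially over time.
    return np.tanh(W_h @ h + W_x @ x + b)

h = np.zeros(n_hidden)
for x in rng.standard_normal((5, n_in)):  # a sequence of 5 input vectors
    h = step(h, x)
print(h.shape)  # (4,)
```

Training such a network amounts to searching for weight matrices that make the hidden-state dynamics implement a useful algorithm, which is the sense in which RNNs "learn programs."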
The first NIPS RNNaissance workshop dates back to 2003. Since then, much has happened. Many of the most successful applications of machine learning (including deep learning) are now driven by RNNs such as Long Short-Term Memory (LSTM): speech recognition, video recognition, natural language processing, image captioning, time series prediction, and more. Via the world's most valuable public companies, billions of people now access this technology on their smartphones and other devices, e.g., in the form of Google voice search or on Apple's iOS. RNNs trained by reinforcement learning and evolution are solving complex control tasks from raw video input, and many RNN-based methods learn sequential attention strategies.
At this symposium, we will review the latest developments in all of these fields. We will focus not only on RNNs themselves, but also on learning machines in which RNNs interact with external memory, such as neural Turing machines and memory networks, and on related architectures such as fast weight networks and neural stack machines. In this context we will also discuss asymptotically optimal program search methods and their practical relevance.
Our target audience has heard a bit about RNNs, the deepest of all neural networks, but will be happy to hear a summary of the basics again before delving into the latest advanced topics, to see and understand what has recently become possible. All invited talks will be followed by open discussions, with further discussion possible during a poster session. Finally, we will hold a panel discussion on the future of RNNs, including their pros and cons.