TU Munich Cogbotlab
Jürgen Schmidhuber's course

MACHINE LEARNING
& OPTIMIZATION II


Most important course material for the exams: the algorithmic info basics handout (Beilage, PDF), Hutter's slides on Solomonoff induction, MDL, etc. (PDF), and the "fastest" algorithm paper (PDF).


General Overview. This is a follow-up course (Vertiefungsfach, i.e., an advanced elective) for those familiar with the material of Machine Learning & Bio-Inspired Optimization I, focusing on learning agents interacting with an initially unknown world. Topics range from the foundations of algorithmic information theory, through asymptotically optimal yet infeasible methods that show the ultimate limits of machine learning, all the way down to practically useful tricks for recurrent nets.

The preliminary outline below does not necessarily reflect the order in which topics will be covered.

Course material. We will often use the blackboard and PowerPoint presentations. Each topic below lists links to supporting material.

Don't worry; you won't have to learn all of this! During the lectures we will explain what's really relevant for the oral exams at the end of the semester. But of course students are encouraged to read more than that! Thanks to Marcus Hutter for some of the material.

More on Recurrent Neural Networks. RNNs can implement complex algorithms, as opposed to the reactive input/output mappings of feedforward nets and SVMs. We deepen the discussion of ML/BIO I on gradient-based and evolutionary learning algorithms for RNNs; a minimal forward-pass sketch follows the preparation list below.
Preparation:
1. Review Chapter 2 of the Beilage.
2. Review the LSTM tutorial (PDF).
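
To make the state-passing concrete, here is a minimal sketch of an RNN forward pass in plain NumPy (the dimensions and random weights are my own toy choices, not from the course material): the hidden state h is fed back at every step, so the output at time t can depend on the whole input history, which a feedforward net or SVM cannot do. Gradient-based training would backpropagate through this very loop (backpropagation through time).

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 3, 5, 2

    W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
    W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)
    W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

    def rnn_forward(xs):
        h = np.zeros(n_hidden)                   # initial state
        ys = []
        for x in xs:                             # one step per sequence element
            h = np.tanh(W_xh @ x + W_hh @ h)     # new state depends on the old state
            ys.append(W_hy @ h)
        return ys

    sequence = [rng.normal(size=n_in) for _ in range(4)]
    outputs = rnn_forward(sequence)
    print(len(outputs), outputs[-1].shape)       # 4 outputs, each of size n_out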
Advanced Reinforcement Learning. We discuss methods for learning to maximize reward with non-reactive policies in realistic environments, where an agent needs to memorize previous inputs to act optimally. We also discuss ways of implementing artificial curiosity for improving a robot's world model; see the toy sketch after the reading list below.
1. Basic material: RL and POMDPs.
2. Papers on RL and POMDPs.
3. Papers on active exploration.
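
As a rough illustration of the curiosity idea, the toy below (my own simplification, not the exact formulation from the papers) pays the agent an intrinsic reward equal to the prediction error of its world model and lets it greedily pick the action whose outcome it currently predicts worst. A little random exploration is mixed in, since purely greedy curiosity can stall once local prediction errors vanish:

    import random
    random.seed(0)

    STATES, ACTIONS = 6, 2
    # Unknown deterministic world: successor state for each (state, action).
    world = {(s, a): (s + a + 1) % STATES for s in range(STATES) for a in range(ACTIONS)}

    model = {}                    # learned (state, action) -> predicted next state
    surprise = {(s, a): 1.0 for s in range(STATES) for a in range(ACTIONS)}

    s = 0
    for step in range(60):
        if random.random() < 0.25:                          # occasional random move
            a = random.randrange(ACTIONS)
        else:                                               # curious move: max expected surprise
            a = max(range(ACTIONS), key=lambda act: surprise[(s, act)])
        s_next = world[(s, a)]
        intrinsic_reward = 0.0 if model.get((s, a)) == s_next else 1.0
        model[(s, a)] = s_next                              # improve the world model
        surprise[(s, a)] = intrinsic_reward                 # surprise drops once learned
        s = s_next

    print("transitions learned:", len(model), "of", STATES * ACTIONS)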
Algorithmic Information Theory / Kolmogorov Complexity. We discuss fundamental notions of information, compressibility, regularity, pattern similarity, etc. This theory provides a basis for optimal universal learning machines; a small compression demo follows the reading list below.
1. Basic material: chapter 2 of algorithmic info basics (PDF).
2. Generalized algorithmic information or Kolmogorov complexity.
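
Kolmogorov complexity K(x) itself is incomputable, but any real compressor gives a computable upper bound on it (up to an additive constant for the decompressor). In the demo below, zlib stands in for the reference machine; the example strings are my own illustration:

    import os
    import zlib

    regular = b"ab" * 5000          # generated by a short rule: low Kolmogorov complexity
    random_ = os.urandom(10000)     # incompressible with high probability

    for name, data in [("regular", regular), ("random", random_)]:
        compressed = len(zlib.compress(data, 9))
        print(f"{name:8s}: {len(data)} bytes -> {compressed} bytes compressed")

The regular string shrinks to a tiny fraction of its length, while the random bytes hardly compress at all; that gap is exactly the kind of regularity K(x) measures.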
Optimal universal learners. Bayesian learning algorithms for predicting and maximizing utility in arbitrary unknown environments, assuming only that the environment follows computable probabilistic laws. These methods are optimal if we ignore computation time. They formalize Occam's Razor (simple solutions are preferred); a toy Bayes-mixture sketch follows the reading list below.
1. Overview: Universal learning machines.
2. Basic material: chapters 3, 4, 5 of algorithmic info basics (PDF).
3. Hutter's slides on Solomonoff induction, MDL, etc. (PDF).
4. Speed Prior.
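
True Solomonoff induction mixes over all computable hypotheses and is itself incomputable; the finite toy below (the hypothesis class and description lengths are my own illustration) only shows the mechanics. Each hypothesis gets prior weight 2^(-description length), which is precisely where Occam's Razor enters, and the next bit is predicted by the posterior-weighted mixture of the hypotheses still consistent with the data:

    from fractions import Fraction

    # Hypothesis class: "repeat this pattern forever".
    patterns = [(0,), (1,), (0, 1), (1, 0), (0, 0, 1), (1, 1, 0)]

    def prior(pattern):
        # Description length ~ len(pattern) bits -> unnormalized weight 2^-len.
        return Fraction(1, 2 ** len(pattern))

    def consistent(pattern, history):
        # Deterministic hypotheses either explain the prefix or are falsified.
        return all(bit == pattern[i % len(pattern)] for i, bit in enumerate(history))

    def predict_one(history):
        # Posterior probability that the next bit is 1.
        total = mass_on_one = Fraction(0)
        for p in patterns:
            if consistent(p, history):
                w = prior(p)
                total += w
                if p[len(history) % len(p)] == 1:
                    mass_on_one += w
        return mass_on_one / total if total else None

    print(predict_one([0, 1, 0, 1, 0]))   # 1: only the period-2 pattern survives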
Optimal universal search techniques. We discuss general search techniques that are optimally efficient in various senses, including Levin's universal search, Hutter's fastest algorithm for all well-defined problems, and Gödel machines; a toy Levin-search sketch follows the reading list below.
1. Basic material: chapter 6 of algorithmic info basics (PDF).
2. Optimal search overview.
3. Hutter's "fastest" algorithm for all well-defined problems (PDF).
4. Optimal ordered problem solver.
5. Gödel machine.
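
To fix ideas, here is a toy rendition of Levin's time-allocation schedule (the bitstring "programs" and the two-bit instruction set are my own invention, not the course's reference machine). In phase k, every program p of length at most k gets 2^(k - |p|) interpreter steps, so a phase costs on the order of 2^k steps, and the solution is found within roughly 2^(|p*|) times the runtime of the fastest solving program p*:

    from itertools import product

    TARGET = (1, 0, 1, 1, 0)    # find a program whose output equals this

    def run(program, max_steps):
        # Toy machine: bits are read in pairs.
        # 00 -> emit 0, 01 -> emit 1, 10 -> re-emit last bit, 11 -> halt.
        out, steps, i = [], 0, 0
        while i + 1 < len(program) and steps < max_steps:
            op = (program[i], program[i + 1])
            if op == (0, 0):
                out.append(0)
            elif op == (0, 1):
                out.append(1)
            elif op == (1, 0) and out:
                out.append(out[-1])
            else:                                # halt instruction or invalid duplicate
                break
            i += 2
            steps += 1
        return tuple(out)

    def levin_search(target, max_phase=16):
        for k in range(1, max_phase + 1):        # phase k: total budget ~2**k steps
            for length in range(1, k + 1):
                budget = 2 ** (k - length)       # shorter programs get more time
                for program in product((0, 1), repeat=length):
                    if run(program, budget) == target:
                        return program, k
        return None

    print(levin_search(TARGET))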