My current research focuses on artificial neural networks, recurrent neural networks, evolutionary algorithms, and deep learning applied to reinforcement learning, control problems, image classification, and handwriting and speech recognition.

At IDSIA, I have been working on the EU-funded projects Humanobs and Nascence, the SNF project "Theory and Practice in Reinforcement Learning 2", as well as projects funded through industrial cooperation.

## Compressed Neural Networks (an ultimately simple method for indirect encoding)

In state-of-the-art neuroevolution, researchers look for efficient ways of encoding artificial neural networks as strings (genomes) of symbols (genes) in order to reduce the search space over such genomes. Jan's recent research on indirect encoding led to a method that describes a neural network weight matrix by a limited set of its frequency coefficients: the genome consists of a small number of frequency coefficients that are transformed into the weight matrix using an inverse Fourier-type frequency transform.

The weight matrix becomes decorrelated once transformed to the frequency domain. The complexity of a genome can be reduced further by encoding the frequency coefficients with a limited number of bits. If the space of coefficients is small (say, less than 32 bits), it can be searched exhaustively, starting from the shortest bit strings.
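At that size the search is just an enumeration of bit strings ordered by length. A minimal sketch of such an enumeration (the generator name is illustrative, not from the papers):

```python
from itertools import product

def bit_strings_shortest_first(max_len):
    """Yield every bit string of length 1 up to max_len, shortest first."""
    for n in range(1, max_len + 1):
        # all 2**n bit strings of length n, in lexicographic order
        for bits in product((0, 1), repeat=n):
            yield bits
```

Each candidate bit string would then be decoded into frequency coefficients and evaluated, so the shortest (lowest-complexity) solutions are found first.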

Surprisingly, some well-known benchmarks can be solved with networks described by fairly short bit strings. For example, a single-pole balancing controller consisting of one neuron can be described by **just 1 bit** (a constant positive weight matrix), which effectively retires the single-pole benchmark.

Manipulating a flexible octopus arm [Yekutieli et al.] with 10 compartments is a control problem with 82 continuous state variables and 32 control variables. A fully-connected RNN with 3680 weights can be evolved in a 20-dimensional coefficient space to learn to reach a target [12d].

So how is it done? Suppose that at every fitness evaluation a genotype is transformed into a phenotype by simply placing the genes into a weight matrix (or a vector).

The easiest way to use a compressed NN is to include code that takes the genes, pads them with zeros, and performs an inverse DCT.
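A sketch of this decoding step in Python (the orthonormal DCT-II convention and the helper names are assumptions; the inverse transform is written out from its definition so only numpy is needed):

```python
import numpy as np

def idct(c):
    """Inverse of the orthonormal DCT-II, computed from its definition."""
    n = len(c)
    j = np.arange(n)
    # DC term contributes a constant; higher coefficients add cosine waves
    x = np.full(n, c[0] / np.sqrt(n))
    for k in range(1, n):
        x += np.sqrt(2.0 / n) * c[k] * np.cos(np.pi * k * (2 * j + 1) / (2 * n))
    return x

def decode_genome(genes, shape):
    """Pad the genome (low-frequency coefficients) with zeros and
    transform it into a weight matrix of the given shape."""
    coeffs = np.zeros(shape[0] * shape[1])
    coeffs[:len(genes)] = genes
    return idct(coeffs).reshape(shape)
```

A genome holding a single positive coefficient decodes to a constant positive weight matrix, which is exactly the 1-bit single-pole controller mentioned above.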

## Visual Reinforcement Learning

The compressed encoding was tested on more challenging RL problems that involve processing a stream of raw visual observations. The scaled-up RNN controllers were evaluated on two vision-based benchmarks. In the first, the visual octopus arm, the controller has no access to the arm state variables and instead receives a raw image stream containing a visualization of the arm.

In the second, it controls a race car in TORCS, driving along a track using a visual stream captured from the driver's perspective. Such a network has over **1.1 million weights**. To our knowledge, this is the first attempt to tackle TORCS using vision, and the first to successfully evolve neural network controllers of this size [13c].

## A Clockwork RNN

A Clockwork RNN (CW-RNN) [14c] is a simple yet powerful modification of the simple RNN (SRN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity and making computations only at its prescribed clock rate. This reduces the number of SRN parameters, significantly improves performance on sequential tasks, and speeds up network evaluation. It has been shown experimentally that the CW-RNN outperforms LSTM [Hochreiter-1997] in audio signal generation and TIMIT spoken-word classification.
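The clocked update can be sketched as follows, as a per-unit formulation in numpy (the function name and the tanh nonlinearity are assumptions; the block-upper-triangular structure of the recurrent matrix, which lets slower modules feed faster ones, is left to the caller):

```python
import numpy as np

def cwrnn_step(h, x, t, W_h, W_x, periods):
    """One Clockwork RNN step (sketch).

    h       : hidden state, the concatenation of all modules
    x       : input vector at time step t
    t       : current time step (int)
    W_h     : recurrent matrix (block-upper-triangular in the CW-RNN)
    W_x     : input weight matrix
    periods : clock period of each hidden unit, e.g. 2**i for module i
    """
    active = (t % periods) == 0          # units whose clock fires at t
    h_new = np.tanh(W_h @ h + W_x @ x)   # proposed update for all units
    return np.where(active, h_new, h)    # inactive units keep their state
```

Because only the active modules are recomputed at each step, slow modules retain long-term context cheaply while fast modules track short-term dynamics.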

The goal of the sequence generation task is to train a recurrent neural network that receives no input to generate a target sequence as accurately as possible. The weights of the network can be seen as a (lossy) encoding of the whole sequence, which could be used for compression. The SRN tends to learn the first few steps of the sequence and then generates the mean of the remaining portion; the output of the LSTM resembles a sliding average, while the CW-RNN approximates the sequence much more accurately.

## Older Projects

### Temporal Hebbian Self-Organizing Map (THSOM)

The Temporal Hebbian Self-Organizing Map [7b,8a] is a recurrent extension of Kohonen's SOM. An additional layer of full recurrent connections among the nodes is trained in a Hebbian way. These connections accumulate first-order statistics of transitions between the states represented by the neurons, while the input connections place the neurons into cluster centroids. The network thus clusters the data in both input space and time. The Hebbian training of the initial THSOM model was improved by [Ferro et al.], who introduced a neighborhood into the temporal connections (Topological THSOM). The network was further extended for unsupervised model learning in RL and named Temporal Network for Transitions (TNT) [11a].
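The Hebbian accumulation of transition statistics can be sketched as below; the decay-plus-reinforce form and the learning rate are assumptions for illustration, not the exact rule from [7b,8a]:

```python
import numpy as np

def thsom_temporal_update(T, prev_winner, winner, eta=0.1):
    """Hebbian update of THSOM temporal connections (assumed form).

    T[i, j] accumulates first-order statistics of transitions from the
    state represented by neuron i to the state represented by neuron j.
    """
    T[prev_winner] *= (1.0 - eta)   # decay competing outgoing transitions
    T[prev_winner, winner] += eta   # reinforce the observed transition
    return T
```

Under this rule each row of `T` converges toward the empirical transition probabilities out of that neuron's state, which is what allows a Markov chain to be extracted from the trained map [7a].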

## Publications (bibtex)

### @ IDSIA

- [14a] J. Koutnik, J. Schmidhuber, F. Gomez, Online evolution of deep convolutional network for vision-based RL, SAB 2014
- [14b] J. Koutnik, F. Gomez, J. Schmidhuber, Evolving deep unsupervised convolutional networks for vision-based RL, GECCO 2014
- [14c] J. Koutnik, K. Greff, F. Gomez, J. Schmidhuber, A clockwork RNN, ICML 2014
- [13a] B. Steunebrink, J. Koutnik, K. R. Thorisson, E. Nivel, J. Schmidhuber, Resource-bounded machines are motivated to be ..., AGI 2013
- [13b] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez, Evolving large-scale neural networks for vision-based TORCS, FDG 2013
- [13c] J. Koutnik, G. Cuccu, J. Schmidhuber, F. Gomez, Evolving large-scale neural networks for vision-based RL, GECCO 2013
- [12a] T. Glasmachers, J. Koutnik, J. Schmidhuber, Kernel representations for evolving continuous functions, EVIN 2012
- [12b] F. Gomez, J. Koutnik, J. Schmidhuber, Compressed networks complexity search, PPSN 2012
- [12c] F. Gomez, J. Koutnik, J. Schmidhuber, Complexity search for compressed neural networks, GECCO 2012
- [12d] J. Koutnik, J. Schmidhuber, F. Gomez, A frequency-domain encoding for neuroevolution, arXiv:1212.6521, 2012
- [11a] V. Graziano, J. Koutnik, J. Schmidhuber, Unsupervised modeling of partially observable environments, ECML/PKDD 2011
- [11b] J. Koutnik, F. Gomez, J. Schmidhuber, Fourier networks, Technical Report IDSIA-02-11, 2011
- [10a] J. Koutnik, F. Gomez, J. Schmidhuber, Searching for minimal neural networks in Fourier space, AGI 2010
- [10b] J. Koutnik, F. Gomez, J. Schmidhuber, Evolving neural networks in compressed weight space, GECCO 2010
- [9a] J. Togelius, S. Karakovskiy, J. Koutnik, J. Schmidhuber, Super Mario evolution, CIG 2009

### @ CTU in Prague

- [10c] P. Kordik, J. Koutnik, J. Drchal, O. Kovarik, M. Cepek, M. Snorek, Metalearning approach to NN opt., Neural Networks 2010
- [9b] J. Drchal, J. Koutnik, M. Snorek, HyperNEAT controlled robots learn how to drive on roads in sim. env., CEC 2009
- [9c] J. Drchal, O. Kapral, J. Koutnik, M. Snorek, Combining multiple inputs in HyperNEAT mobile agent controller, ICANN 2009
- [9d] Z. Buk, J. Koutnik, M. Snorek, NEAT in HyperNEAT substituted with genetic programming, ICANNGA 2009
- [8a] J. Koutnik, M. Snorek, Temporal Hebbian self-organizing map for sequences, ICANN 2008
- [7a] J. Koutnik, M. Snorek, Extraction of Markov chain from temporal Hebbian self-organizing map, IWMSM 2007
- [7b] J. Koutnik, Inductive modelling of temporal sequences by means of self-organization, IWIM 2007
- [7c] J. Koutnik, M. Snorek, New trends in simulation of neural networks, EUROSIM 2007
- [7d] J. Drchal, P. Kordik, J. Koutnik, Visualization of diversity in computational intelligence methods, ISGI 2007
- [6a] J. Koutnik, R. Mazl, M. Kulich, Building of 3D environment models for mobile robotics using self-organization, PPSN 2006
- [6b] R. Trnka, J. Koutnik, Application of the Kohonen's SOM and the GAME in social cognition research, Psychologia 2006
- [6c] J. Koutnik, M. Snorek, Self-organizing neural networks for signal recognition, ICANN 2006
- [5a] J. Koutnik, M. Snorek, Neural network generating hidden Markov chain, ICANNGA 2005
- [4a] J. Koutnik, M. Snorek, Efficient simulation of modular neural networks, EUROSIM 2004
- [4b] J. Koutnik, M. Snorek, Single categorizing and learning module for temporal sequences, IJCNN 2004
- [3a] J. Kubalik, J. Koutnik, Automatic generation of fuzzy rule based classifiers by evolutionary algorithms, IASM 2003
- [3b] J. Koutnik, M. Snorek, Enhancement of categorizing and learning module (CALM) - embedded det. of signal change, ICANN 2003
- [3c] J. Kubalik, J. Koutnik, L. J. M. Rothkrantz, Grammatical evolution with bidirectional representation, EuroGP 2003
- [2a] J. Brunner, J. Koutnik, SiMoNNe - Simulator of Modular Neural Networks, NNW 2002
- [2b] J. Koutnik, J. Brunner, M. Snorek, The GOLOKO neural network for vision - analysis of behavior, ICCVG 2002