
1. Introduction

In the sequel, an on-line algorithm for reinforcement learning in non-stationary reactive environments is described first. The algorithm relies heavily on an adaptive model of the environmental dynamics. The main contribution of this paper (see the second section) is to demonstrate how the algorithm may be naturally augmented by curiosity and boredom in order to improve the world model in an on-line manner.

Consider an `animat' whose movements are controlled by the output units of a neural network, called the control network, which also receives the animat's sensory perception by means of its input units. The animat is potentially able to produce actions that may change the environmental input (external feedback caused by the `reactive' environment). By means of recurrent connections in the network, the animat is also potentially able to internally represent past events (internal feedback).

The animat sometimes experiences different types of reinforcement by means of so-called reinforcement units or pain units that become activated in moments of reinforcement or `pain' (e.g. the experience of bumping against an obstacle with an extremity). The animat's only goal is to minimize cumulative pain and maximize cumulative reinforcement. The animat is autonomous in the sense that no intelligent external teacher is required to provide additional goals or subgoals for it.

Reinforcement units and pain units are similar to other input units in the sense that they possess conventional outgoing connections to other units. However, unlike normal input units, they have desired activation values at all times. For the purposes of this paper, the desired activation of a pain unit is zero at all times; other reinforcement units may have positive desired values. In the sequel we assume a discrete time environment with `time ticks'. At a given time, the quantity to be minimized by the learning algorithm is $ \sum_{t,i}(c_i - y_{i}(t))^2 $, where $y_{i}(t)$ is the activation of the $i$th pain or reinforcement unit at time $t$, $t$ ranges over all remaining time ticks still to come, and $c_i$ is the desired activation of the $i$th reinforcement or pain unit for all times.
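The following toy computation (not from the paper; all values are invented) merely spells out this quantity for a short record of pain and reinforcement activations:

import numpy as np

# y[t, i]: activation of the i-th pain or reinforcement unit at time tick t
# (values invented for illustration); c[i]: its desired activation, zero for
# a pain unit and possibly positive for a reinforcement unit.
y = np.array([[0.3, 0.9],
              [0.1, 0.8],
              [0.0, 1.0]])
c = np.array([0.0, 1.0])

cumulative_error = ((c - y) ** 2).sum()   # the quantity the animat tries to minimize
print(cumulative_error)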

The reinforcement learning animat faces a very general spatio-temporal credit assignment task: No external teacher provides knowledge about e.g. desired outputs or `episode boundaries' (externally defined temporal boundaries of training intervals). In the sequel it is demonstrated how the animat may employ a combination of two recurrent self-supervised learning networks in order to satisfy its goal.

Munro [2], Jordan [1], Werbos [12], Robinson and Fallside [6], and Nguyen and Widrow [4] used `model networks' for constructing a mapping from output actions of a control network to their effects in in `task space' [1]. The general system described below (an improved version of the variants described in [8] and [10]) also employs an adaptive model of the environmental dynamics for computing gradients of the control network's inputs with respect to the controller weights. The model network is trained to simulate the environment by making predictions about future inputs, including pain and reinforcement inputs. Training of the controller works as follows: Since we cannot propagate input errors (e.g. differences between actual pain signals and desired zero pain signals) `through the environment', we propagate them through the combination of model network and controller. Only the controller weights change during this phase, the weights of the model network have to remain fixed. (Actually, we do not really propagate errors through the networks. We use an algorithm which is functionally equivalent to `back-propagation through time' but uses only computations `going forward in time'. So there is no need for storing past activations.)
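To make the error-propagation view concrete, here is a minimal modern sketch. It is not the paper's method: it uses PyTorch, plain back-propagation, feedforward stand-ins for the recurrent networks, and invented unit counts, but it shows pain errors being pushed through a frozen model network into the controller.

import torch
import torch.nn as nn

n_in, n_act = 8, 3                    # hypothetical numbers of input and action units
controller = nn.Sequential(nn.Linear(n_in, 16), nn.Sigmoid(),
                           nn.Linear(16, n_act), nn.Sigmoid())
model = nn.Sequential(nn.Linear(n_in + n_act, 16), nn.Sigmoid(),
                      nn.Linear(16, n_in), nn.Sigmoid())   # predicts the next input (incl. pain)

for p in model.parameters():          # the model's weights stay fixed in this phase
    p.requires_grad_(False)

opt = torch.optim.SGD(controller.parameters(), lr=0.1)

x = torch.rand(1, n_in)               # current (toy) perception
action = controller(x)                # controller proposes an action
pred_next = model(torch.cat([x, action], dim=1))   # model predicts the action's effects
pain_pred = pred_next[:, :2]          # pretend the first two predictions are pain units
target = torch.zeros_like(pain_pred)  # desired pain is zero at all times

loss = ((target - pain_pred) ** 2).sum()
opt.zero_grad()
loss.backward()                       # error flows through the frozen model into the controller
opt.step()                            # only the controller's weights change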

Both the control network and the model network are fully recurrent. The algorithm differs from algorithms by other authors in at least some of the following respects: It aims at on-line learning and locality in time, it does not require `episode boundaries', it needs only reinforcement information for learning, it allows different kinds of reinforcement (or pain), and it allows both internal and external feedback with theoretically arbitrary time lags. Unlike Robinson and Fallside's approach (the one most closely related to the approach described here), `credit assignment paths' are provided that lead from pain units back to output units and from there back to all input units, and so on. There are also credit assignment paths that lead from input units back to the input units themselves, and from there to the output units. The latter paths are important in the common case where the environment can change even when there have been no recent output actions.

The discrete time algorithm below concurrently adjusts the fully recurrent model network and the fully recurrent control network. An on-line version of Robinson and Fallside's infinite input duration (IID) learning algorithm for fully recurrent networks [5] (first implemented by Williams and Zipser [14]) is used for training both the model network and the combination of controller and model network. The algorithm is a particular instantiation of a more general form and is based on the logistic activation function for all non-input units.
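For reference, the logistic function and the form of its derivative that appears in the gradient formulas below can be written as follows (a trivial Python sketch):

import numpy as np

def logistic(x):
    # logistic activation used for all non-input units
    return 1.0 / (1.0 + np.exp(-x))

def logistic_prime(y):
    # derivative expressed in terms of the activation itself;
    # this is the factor y_k (1 - y_k) appearing in the p-updates below
    return y * (1.0 - y)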

In step 1 of the main loop of the algorithm, actions in the external world are computed. Due to the internal feedback, these actions are based on previous inputs and outputs. For all new activations, the corresponding derivatives with respect to all controller weights are updated.

In step 2 actions are executed in the external world, and the effects of the current action and/or previous actions may become visible.

In step 3 the model network tries to predict these effects without seeing the new input. Again the relevant gradient information is computed.

In step 4 the model network is updated in order to better predict the input (including pain) for the controller. Finally, the weights of the control network are updated in order to minimize the cumulative differences between desired and actual activations of the pain and reinforcement units. Since the control network continues activation spreading based on the actual inputs instead of using the predictions of the model network, `teacher forcing' [14] is used in the model network.

The system below incorporates various improvements over the systems described in [8] and [10]. For instance, the partial derivatives of the controller's inputs with respect to the controller's weights are approximated by the partial derivatives of the corresponding predictions generated by the model network. Furthermore, the model sees the last input and the current output of the controller at the same time.

Notation (the reader may find it convenient to compare with [14]):



$C$ is the set of all non-input units of the control network, $A$ is the set of its output units, $I$ is the set of its `normal' input units, $P$ is the set of its pain and reinforcement units, $M$ is the set of all units of the model network, $O$ is the set of its output units, $O_{P} \subset O$ is the set of all units that predict pain or reinforcement, $W_{M}$ is the set of variables for the weights of the model network, $W_{C}$ is the set of variables for the weights of the control network, $ y_{k_{new}} $ is the variable for the updated activation of the $k$th unit from $M \cup C \cup I \cup P$, $ y_{k_{old}} $ is the variable for the last value of $ y_{k_{new}} $, $w_{ij}$ is the variable for the weight of the directed connection from unit $j$ to unit $i$, $p_{ij_{new}}^{k}$ is the variable which gives the current (approximated) value of $ \frac{\partial y_{k_{new}} }{\partial w_{ij}}$, $p_{ij_{old}}^{k}$ is the variable which gives the last value of $p_{ij_{new}}^{k}$, if $k \in P$ then $c_k$ is $k$'s desired activation for all times, $\alpha_C$ is the learning rate for the control network, $\alpha_M$ is the learning rate for the model network.

$\mid I \cup P \mid = \mid O \mid $, $\mid O_{P} \mid = \mid P \mid $. If $k \in I \cup P $, then $kpred$ is the unit from $O$ which predicts $k$. Each unit from $I \cup P \cup A$ has one forward connection to each unit from $M \cup C$. Each unit from $M$ is connected to each other unit from $M$. Each unit from $C$ is connected to each other unit from $C$. Each weight of a connection leading to a unit in $M$ is said to belong to $W_{M}$. Each weight of a connection leading to a unit in $C$ is said to belong to $W_{C}$. Each weight $w_{ij} \in W_{M}$ needs $p_{ij}^{k}$-values for all $k \in M $. Each weight $w_{ij} \in W_{C}$ needs $p_{ij}^{k}$-values for all $k \in M \cup C \cup I \cup P $.
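The connectivity and the bookkeeping requirements stated above can be summarized as follows (an illustrative Python transcription, not part of the original formulation; recall that $A$ is a subset of $C$):

incoming = {
    'M': ['I', 'P', 'A', 'M'],      # model units see inputs, pain/reinforcement, actions, and M itself
    'C': ['I', 'P', 'A', 'C'],      # controller units see inputs, pain/reinforcement, and C (A is part of C)
}
p_values_needed = {
    'W_M': ['M'],                   # each w_ij in W_M keeps p_ij^k for all k in M
    'W_C': ['M', 'C', 'I', 'P'],    # each w_ij in W_C keeps p_ij^k for all k in M, C, I, P
}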

INITIALIZATION:

For all $w_{ij} \in W_{M} \cup W_{C}$:

begin $w_{ij} \leftarrow $ random,

for all possible $k$: $ p_{ij_{old}}^{k} \leftarrow 0, p_{ij_{new}}^{k} \leftarrow 0 $ end.

For all $ k \in M \cup C :
y_{k_{old}} \leftarrow 0,
y_{k_{new}} \leftarrow 0. $

For all $k \in I \cup P $ :

Set $ y_{k_{old}} $ by environmental perception, $y_{k_{new}} \leftarrow 0.$
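Under the simplifying assumption of a single fully recurrent network with $n$ non-input units and $m$ input lines (unit counts invented), the initialization above could look as follows; the same layout is reused in the loop sketch further below.

import numpy as np

n, m = 8, 4                              # hypothetical numbers of non-input units and input lines
rng = np.random.default_rng(0)
W = rng.uniform(-0.1, 0.1, (n, n + m))   # w_ij <- random
p_old = np.zeros((n, n, n + m))          # p_old[k, i, j] ~ dy_k / dw_ij, initialized to zero
p_new = np.zeros_like(p_old)
y_old = np.zeros(n)                      # non-input activations start at zero
y_new = np.zeros(n)
x_old = rng.random(m)                    # input activations set by (here: simulated) perception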



FOREVER REPEAT:

1. For all $i \in C:
y_{i_{new}} \leftarrow \frac{1}{1 + e^{- \sum_{j} w_{ij} y_{j_{old}} }}$.

For all $w_{ij} \in W_{C},
k \in C$:

$
p_{ij_{new}}^{k} \leftarrow y_{k_{new}}(1 - y_{k_{new}})
( \sum_{l }
w_{kl} p_{ij_{old}}^{l} +
\delta_{ik} y_{j_{old}})
$

For all $k \in C$:

begin $y_{k_{old}} \leftarrow y_{k_{new}}$,

for all $w_{ij} \in W_{C}$ : $p_{ij_{old}}^{k} \leftarrow p_{ij_{new}}^{k}$ end .

2. Execute all motoric actions based on activations of

units in $A$. Update the environment.

For all $i \in I \cup P$:

Set $y_{i_{new}} $ by environmental perception.

3. For all $i \in M:
y_{i_{new}} \leftarrow \frac{1}{1 + e^{- \sum_{j} w_{ij} y_{j_{old}} }}$.

For all $w_{ij} \in W_{M} \cup W_C, k \in M$:

$
p_{ij_{new}}^{k} \leftarrow y_{k_{new}}(1 - y_{k_{new}})
( \sum_{l } w_{kl} p_{ij_{old}}^{l} + \delta_{ik} y_{j_{old}}) .
$

For all $k \in M $:

begin $y_{k_{old}} \leftarrow y_{k_{new}}$,

for all $ w_{ij} \in W_{C} \cup W_M$ : $p_{ij_{old}}^{k} \leftarrow p_{ij_{new}}^{k}$ end.

4. For all $w_{ij} \in W_{M}$:

$
w_{ij} \leftarrow w_{ij} + \alpha_{M} \sum_{k \in I \cup P}
(y_{k_{new}} - y_{kpred_{old}}) p_{ij_{old}}^{kpred} .
$

For all $w_{ij} \in W_{C}$:

$
w_{ij} \leftarrow w_{ij} + \alpha_{C} \sum_{k \in P}
(c_k - y_{k_{new}}) p_{ij_{old}}^{kpred} .
$

For all $k \in I \cup P $:

begin $y_{k_{old}} \leftarrow y_{k_{new}}$, $ y_{kpred_{old}} \leftarrow y_{k_{new}}$,

for all $w_{ij} \in W_{M}:
p_{ij_{old}}^{kpred} \leftarrow 0$,

for all $w_{ij} \in W_{C}:
p_{ij_{old}}^{k} \leftarrow
p_{ij_{old}}^{kpred} $ end.
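As an illustration, here is a sketch of one time tick of steps 3 and 4 restricted to the model network alone, i.e. a single fully recurrent network predicting its own next input. It is a simplification, not the full algorithm: the controller, the pain-specific weight update, and the copying of $p_{ij_{old}}^{kpred}$ into $p_{ij_{old}}^{k}$ are omitted, and all sizes are invented. It reuses the initialization sketched above.

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# initialization as sketched above
n, m = 8, 4
rng = np.random.default_rng(0)
W = rng.uniform(-0.1, 0.1, (n, n + m))
p_old = np.zeros((n, n, n + m))          # p_old[k, i, j] ~ dy_k / dw_ij
y_old = np.zeros(n)
x_old = rng.random(m)                    # last perception
alpha_M = 0.1

# --- one time tick ---
z_old = np.concatenate([y_old, x_old])   # everything a model unit can see
y_new = logistic(W @ z_old)              # step 3: activation update

# step 3: p_new[k,i,j] = y_k (1 - y_k) (sum_l w_kl p_old[l,i,j] + delta_ik z_old[j])
p_new = np.einsum('kl,lij->kij', W[:, :n], p_old)
p_new[np.arange(n), np.arange(n), :] += z_old
p_new *= (y_new * (1.0 - y_new))[:, None, None]

# step 4: pull the first m units (the prediction units) towards the new perception
x_new = rng.random(m)                    # stand-in for the actually observed new input
W += alpha_M * np.einsum('k,kij->ij', x_new - y_new[:m], p_new[:m])

# teacher forcing: predictions are replaced by observations, their p-values reset
y_new[:m] = x_new
p_new[:m] = 0.0
y_old, p_old, x_old = y_new, p_new, x_new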



By employing probabilistic output units for $C$ and by using `gradient descent through random number generators' [13] we can introduce explicit explorative random search capabilities into the otherwise deterministic algorithm. In the context of the IID algorithm, this works as follows: A probabilistic output unit $k$ consists of a conventional unit $k\mu$ which acts as a mean generator and a conventional unit $k\sigma$ which acts as a variance generator. At a given time, the probabilistic output $ y_{k_{new}} $ is computed by

\begin{displaymath}y_{k_{new}} =
y_{k\mu_{new}} + z
y_{k\sigma_{new}}, \end{displaymath}

where $z$ is distributed e.g. according to the normal distribution. The corresponding $p_{ij_{new}}^{k}$ have to be updated according to the following rule:


\begin{displaymath}p_{ij_{new}}^{k} \leftarrow
p_{ij_{new}}^{k\mu} +
\frac{ y_{k_{new}} - y_{k\mu_{new}} }{ y_{k\sigma_{new}} }
p_{ij_{new}}^{k\sigma} .
\end{displaymath}
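A tiny numerical sketch of these two rules (all values invented; the weight block shapes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

y_mu, y_sigma = 0.6, 0.2              # activations of the mean and variance generators
p_mu = np.full((3, 5), 0.01)          # p_ij^{k mu} for some block of weights
p_sigma = np.full((3, 5), -0.02)      # p_ij^{k sigma} for the same block

z = rng.standard_normal()             # noise, e.g. normally distributed
y_k = y_mu + z * y_sigma              # probabilistic output

# corresponding p-update; note that (y_k - y_mu) / y_sigma is just the sampled z
p_k = p_mu + ((y_k - y_mu) / y_sigma) * p_sigma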

By performing more than one iteration of step 1 and step 3 at each time tick, one can adjust the algorithm to environments that change in a manner which is not predictable by semilinear operations (theoretically three additional iterations are sufficient for any environment).

The algorithm is local in time, but not in space. See [8] for a justification of certain deviations from `pure gradient descent through time', and for a description of how the algorithm can be used for planning action sequences. See [7] and [9] for two quite different entirely local methods.

