
THE ALGORITHM

The sequential version of the algorithm can be obtained in a straightforward manner from the description of the parallel version below. At every time step, the parallel version performs essentially the same operations:

In step 1 of the main loop of the algorithm, the actions to be performed in the external world are computed. These actions are based on both current and previous inputs and outputs. For all new activations, the corresponding derivatives with respect to all controller weights are updated. In step 2, the actions are executed in the external world, and the effects of the current action and/or previous actions may become visible. In step 3, the model network sees the last input and the current output of the controller at the same time; it tries to predict the new input without seeing it. Again the relevant gradient information is computed. In step 4, the model network is updated in order to better predict the input (including pleasure and pain) for the controller. The weights of the control network are updated in order to minimize the cumulative differences between desired and actual activations of the pain and pleasure units. `Teacher forcing' [13] is used in the model network (although there is no teacher besides the environment). The partial derivatives of the controller's inputs with respect to the controller's weights are approximated by the partial derivatives of the corresponding predictions generated by the model network.

Notation (the reader may find it convenient to compare with [13]): $C$ is the set of all non-input units of the control network, $A$ is the set of its output units, $I$ is the set of its `normal' input units, and $P$ is the set of its pain and pleasure units. $M$ is the set of all units of the model network, $O$ is the set of its output units, and $O_{P} \subset O$ is the set of all units that predict pain or pleasure. $W_{M}$ is the set of variables for the weights of the model network, and $W_{C}$ is the set of variables for the weights of the control network. $ y_{k_{new}} $ is the variable for the updated activation of the $k$th unit from $M \cup C \cup I \cup P$, and $ y_{k_{old}} $ is the variable for the last value of $ y_{k_{new}} $. $w_{ij}$ is the variable for the weight of the directed connection from unit $j$ to unit $i$. $\delta_{ik}$ is the Kronecker delta, which is 1 for $i=k$ and 0 otherwise. $p_{ij_{new}}^{k}$ is the variable which gives the current (approximated) value of $ \frac{\partial y_{k_{new}} }{\partial w_{ij}}$, and $p_{ij_{old}}^{k}$ is the variable which gives the last value of $p_{ij_{new}}^{k}$. If $k \in P$, then $c_k$ is $k$'s desired activation for all times. If $k \in I \cup P $, then $kpred$ is the unit from $O$ which predicts $k$. $\alpha_C$ is the learning rate for the control network, and $\alpha_M$ is the learning rate for the model network.

The architecture satisfies $\mid I \cup P \mid = \mid O \mid $ and $\mid O_{P} \mid = \mid P \mid $. Each unit in $I \cup P \cup A$ has one forward connection to each unit in $M \cup C$; each unit in $M$ is connected to each other unit in $M$; each unit in $C$ is connected to each other unit in $C$. Each weight variable of a connection leading to a unit in $M$ is said to belong to $W_{M}$; each weight variable of a connection leading to a unit in $C$ is said to belong to $W_{C}$. For each weight $w_{ij} \in W_{M}$ there are $p_{ij}^{k}$-values for all $k \in M $; for each weight $w_{ij} \in W_{C}$ there are $p_{ij}^{k}$-values for all $k \in M \cup C \cup I \cup P $. The parallel version of the algorithm works as follows:

INITIALIZATION:

$\forall$ $w_{ij} \in W_{M} \cup W_{C}$: $w_{ij} \leftarrow $ random, $\forall$ possible $k$: $ p_{ij_{old}}^{k} \leftarrow 0, p_{ij_{new}}^{k} \leftarrow 0 $ .

$\forall$ $ k \in M \cup C :
y_{k_{old}} \leftarrow 0,
y_{k_{new}} \leftarrow 0. $

$\forall$ $k \in I \cup P $ : Set $ y_{k_{old}} $ according to the current environment, $y_{k_{new}} \leftarrow 0.$

UNTIL TERMINATION CRITERION IS REACHED :

1. $\forall$ $i \in C:
y_{i_{new}} \leftarrow \frac{1}{1 + e^{- \sum_{j} w_{ij} y_{j_{old}} }}$.

$\forall$ $w_{ij} \in W_{C},
k \in C$: $
p_{ij_{new}}^{k} \leftarrow y_{k_{new}}(1 - y_{k_{new}})
( \sum_{l }
w_{kl} p_{ij_{old}}^{l} +
\delta_{ik} y_{j_{old}}) .
$

$\forall$ $k \in C$: $y_{k_{old}} \leftarrow y_{k_{new}}$, $\forall$ $w_{ij} \in W_{C}$ : $p_{ij_{old}}^{k} \leftarrow p_{ij_{new}}^{k}$ .

2. Execute all actions based on activations of units in $A$. Update the environment.

$\forall$ $i \in I \cup P$: Set $y_{i_{new}} $ according to environment.

3. $\forall$ $i \in M:
y_{i_{new}} \leftarrow \frac{1}{1 + e^{- \sum_{j} w_{ij} y_{j_{old}} }}$.

$\forall$ $w_{ij} \in W_{M} \cup W_C, k \in M$: $
p_{ij_{new}}^{k} \leftarrow y_{k_{new}}(1 - y_{k_{new}})
( \sum_{l }
w_{kl} p_{ij_{old}}^{l} +
\delta_{ik} y_{j_{old}}) .
$

$\forall$ $k \in M $: $y_{k_{old}} \leftarrow y_{k_{new}}$, $\forall$ $ w_{ij} \in W_{C} \cup W_M$ : $p_{ij_{old}}^{k} \leftarrow p_{ij_{new}}^{k}$ .

4. $\forall$ $w_{ij} \in W_{M}$: $
w_{ij} \leftarrow w_{ij} + \alpha_{M} \sum_{k \in I \cup P}
(y_{k_{new}} - y_{kpred_{old}}) p_{ij_{old}}^{kpred} .
$

$\forall$ $w_{ij} \in W_{C}$: $
w_{ij} \leftarrow w_{ij} + \alpha_{C} \sum_{k \in P}
(c_k - y_{k_{new}}) p_{ij_{old}}^{kpred} .
$

$\forall$ $k \in I \cup P $: $y_{k_{old}} \leftarrow y_{k_{new}}$, $ y_{kpred_{old}} \leftarrow y_{k_{new}}$, $\forall$ $w_{ij} \in W_{M}$: $p_{ij_{old}}^{kpred} \leftarrow 0$, $\forall$ $w_{ij} \in W_{C}$: $p_{ij_{old}}^{k} \leftarrow p_{ij_{old}}^{kpred} $ .
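To make the bookkeeping concrete, the following Python sketch is one possible transcription of steps 1-4 above, written directly from the formulas. It is an illustration only, not the original implementation: the unit counts, the global unit numbering, the single weight matrix, the dummy environment `env_step', and the learning rates are arbitrary assumptions, and the nested loops trade the parallelism of the formal version for readability.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sig(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes; all concrete numbers are illustrative assumptions.
nI, nP, nA, nCh, nMh = 3, 1, 2, 4, 6   # inputs, pain units, actions, extra C units, extra M units
# Global unit numbering: [ I | P | C (output units A first) | M (prediction units O first) ]
I = list(range(0, nI))
P = list(range(nI, nI + nP))
C = list(range(nI + nP, nI + nP + nA + nCh))
A = C[:nA]
M = list(range(C[-1] + 1, C[-1] + 1 + (nI + nP) + nMh))
O = M[:nI + nP]                        # O[n] predicts the n-th unit of I + P
kpred = {u: O[n] for n, u in enumerate(I + P)}
N = M[-1] + 1

# Connectivity from the architecture description (A is a subset of C).
in_C = {k: I + P + C for k in C}       # units feeding a controller unit
in_M = {k: I + P + A + M for k in M}   # units feeding a model unit

w = 0.1 * rng.standard_normal((N, N))
# pC[i, j, k] approximates dy_k/dw_ij for w_ij in W_C; pM likewise for W_M.
pC_old = np.zeros((N, N, N)); pC_new = np.zeros((N, N, N))
pM_old = np.zeros((N, N, N)); pM_new = np.zeros((N, N, N))
y_old = np.zeros(N); y_new = np.zeros(N)
y_old[I + P] = rng.random(nI + nP)     # initial observation
c = {k: 0.0 for k in P}                # desired (low) pain activations
alpha_C, alpha_M = 0.1, 0.1

def env_step(actions):
    """Hypothetical stand-in for the external world; returns new I and P values."""
    return rng.random(nI + nP)

for t in range(20):
    # Step 1: controller forward pass and its RTRL-style derivatives.
    for k in C:
        y_new[k] = sig(sum(w[k, j] * y_old[j] for j in in_C[k]))
    for k in C:
        d = y_new[k] * (1.0 - y_new[k])
        for i in C:
            for j in in_C[i]:
                pC_new[i, j, k] = d * (sum(w[k, l] * pC_old[i, j, l] for l in in_C[k])
                                       + (y_old[j] if i == k else 0.0))
    y_old[C] = y_new[C]
    pC_old[:, :, C] = pC_new[:, :, C]

    # Step 2: execute actions, observe new inputs (including pain/pleasure).
    y_new[I + P] = env_step(y_old[A])

    # Step 3: model forward pass, predicting the new input without seeing it.
    for k in M:
        y_new[k] = sig(sum(w[k, j] * y_old[j] for j in in_M[k]))
    for k in M:
        d = y_new[k] * (1.0 - y_new[k])
        for i in M:                    # W_M weights: only model units carry p-values
            for j in in_M[i]:
                pM_new[i, j, k] = d * (sum(w[k, l] * pM_old[i, j, l] for l in M)
                                       + (y_old[j] if i == k else 0.0))
        for i in C:                    # W_C weights: the Kronecker term vanishes here
            for j in in_C[i]:
                pC_new[i, j, k] = d * sum(w[k, l] * pC_old[i, j, l] for l in in_M[k])
    y_old[M] = y_new[M]
    pM_old[:, :, M] = pM_new[:, :, M]
    pC_old[:, :, M] = pC_new[:, :, M]

    # Step 4: train the model on prediction errors, the controller on pain.
    for i in M:
        for j in in_M[i]:
            w[i, j] += alpha_M * sum((y_new[k] - y_old[kpred[k]]) * pM_old[i, j, kpred[k]]
                                     for k in I + P)
    for i in C:
        for j in in_C[i]:
            w[i, j] += alpha_C * sum((c[k] - y_new[k]) * pC_old[i, j, kpred[k]]
                                     for k in P)
    # Teacher forcing and hand-over of the approximated input derivatives.
    for k in I + P:
        y_old[k] = y_new[k]
        y_old[kpred[k]] = y_new[k]
        pM_old[:, :, kpred[k]] = 0.0
        pC_old[:, :, k] = pC_old[:, :, kpred[k]]
\end{verbatim}

The three-index arrays holding the $p_{ij}^{k}$-values make explicit why the scheme is local in time but not in space, and where the complexity bound given below comes from.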

The algorithm is local in time, but not in space. The computational complexity per time step is $O( \mid W_M \cup W_C \mid \mid M \mid \mid M \cup I \cup P \cup A \mid +
\mid W_C \mid \mid C \mid \mid I \cup P \cup C \mid ). $ In what follows we describe some useful extensions of the scheme.

1. More network ticks than environmental ticks. For highly `non-linear' environments, the algorithm has to be modified in a straightforward manner such that the networks involved perform more than one (but not more than three) iterations of step 1 and step 3 at each time step. (Four layers of operations can in principle approximate any desired mapping with arbitrary accuracy.)

2. Adaptive randomness. Explicit explorative random search capabilities can be introduced by probabilistic controller outputs and `gradient descent through random number generators' [12]. We adjust both the mean and the variance of the controller actions. In the context of the IID algorithm, this works as follows: A probabilistic output unit $k$ consists of a conventional unit $k\mu$ which acts as a mean generator and a conventional unit $k\sigma$ which acts as a variance generator. At a given time, the probabilistic output $ y_{k_{new}} $ is computed by $y_{k_{new}} = y_{k\mu_{new}} + z y_{k\sigma_{new}}$, where $z$ is distributed, e.g., according to the normal distribution. The corresponding $p_{ij_{new}}^{k}$ must then be updated according to the following rule:

\begin{displaymath}
p_{ij_{new}}^{k} \leftarrow
p_{ij_{new}}^{k\mu} +
\frac{ y_{k_{new}} - y_{k\mu_{new}} }
{ y_{k\sigma_{new}} }
p_{ij_{new}}^{k\sigma} .
\end{displaymath}
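As a small illustration, the sampling of a probabilistic output and the corresponding derivative update can be sketched as follows (a minimal sketch, assuming a unit Gaussian noise source; all numerical values and variable names are invented):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# One probabilistic output unit: 'mu' and 'sigma' are the two conventional
# units; p_mu and p_sigma are their derivative estimates with respect to one
# particular controller weight w_ij (all values here are invented).
y_mu, y_sigma = 0.6, 0.2
p_mu, p_sigma = 0.05, -0.01

z = rng.standard_normal()          # noise source, e.g. a unit Gaussian
y_k = y_mu + z * y_sigma           # probabilistic action

# Derivative of the sampled action with respect to w_ij, using the realized
# z = (y_k - y_mu) / y_sigma, as in the update rule above:
p_k = p_mu + ((y_k - y_mu) / y_sigma) * p_sigma
\end{verbatim}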

A more sophisticated strategy to improve the model network is to introduce `adaptive curiosity and boredom'. The principle of adaptive curiosity for model-building neural controllers [7] says: Spend additional reinforcement whenever there is a mismatch between the expectations of the model network and reality.
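A minimal sketch of such an intrinsic reinforcement signal, assuming (as one possible choice) a squared prediction error with a free scaling factor:

\begin{verbatim}
import numpy as np

def curiosity_reward(actual_inputs, predicted_inputs, scale=0.1):
    """Additional reinforcement whenever the model's expectations and reality
    mismatch; the squared-error form and the scale factor are assumptions."""
    actual = np.asarray(actual_inputs, dtype=float)
    predicted = np.asarray(predicted_inputs, dtype=float)
    return scale * float(np.sum((actual - predicted) ** 2))
\end{verbatim}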

3. Perfect models. Sometimes one can gain a `perfect' model by constructing an appropriate mathematical description of the environmental dynamics. This saves the time needed to train the model. However, additional external knowledge is required. For instance, the description of the environment might be in form of differential or difference equations. In the context of the algorithm above, this means introducing new $p_{ij}^{\eta}$ variables for each $w_{ij} \in W_C$ and each relevant state variable $\eta(t)$ of the dynamical environment. The new variables serve to accumulate the values of $\frac{\partial \eta(t) }{\partial w_{ij}}$. This can be done in exactly the same cumulative manner as with the activations of the model network above.
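A minimal sketch of this extension for a one-dimensional environment whose dynamics are given by a known difference equation $\eta(t+1) = f(\eta(t), a(t))$; the particular $f$, its partial derivatives, and all names below are illustrative assumptions:

\begin{verbatim}
def f(eta, a):                 # known environmental dynamics (assumed)
    return 0.9 * eta + 0.5 * a

def df_deta(eta, a):           # partial derivative of f with respect to eta
    return 0.9

def df_da(eta, a):             # partial derivative of f with respect to a
    return 0.5

def update_p_eta(p_eta, p_action, eta, a):
    """Accumulate d eta / d w_ij for every controller weight w_ij, in the same
    cumulative manner as the p-values of a model unit; p_action[w_ij] holds the
    controller's own derivative d a / d w_ij from step 1."""
    return {w_ij: df_deta(eta, a) * p_eta[w_ij] + df_da(eta, a) * p_action[w_ij]
            for w_ij in p_eta}

# Toy usage with a single (hypothetical) controller weight:
p_eta = {("i", "j"): 0.0}
p_eta = update_p_eta(p_eta, p_action={("i", "j"): 0.3}, eta=1.0, a=0.2)
\end{verbatim}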

4. Augmenting the algorithm by TD-methods. The following ideas are not limited to recurrent nets, but are also relevant for feed-forward controllers in Markovian environments.

It is possible to augment model-building algorithms with an `adaptive critic' method. To simplify the discussion, let us assume that there are no pleasure units, just pain units. The algorithm's goal is to minimize cumulative pain. We introduce the TD-principle [10] by changing the error function of the units in $O_P$: At a given time $t$, the contribution of each unit $kpred \in O_P$ to the model network's error is $ y_{kpred}(t) - \gamma y_{kpred}(t+1) - y_k(t+1)$, where $y_i(t)$ is the activation of unit $i$ at time $t$, and $0 < \gamma < 1$ is a discount factor for avoiding predictions of infinite sums. Thus $O_{P}$ is trained to predict the sum of all (discounted) future pain vectors and becomes a vector-valued adaptive critic. (This affects the first $\forall$-loop in step 4 .)

The controller's goal is to minimize the absolute value of $M$'s pain predictions. Thus, the contribution of time $t$ to the error function of the controller now becomes $ \sum_{kpred \in O_{P}} (y_{kpred}(t))^2 $. (This affects the second $\forall$-loop in step 4 of the algorithm.) Note that the adaptive critic component evaluates not a state by itself, but a combination of a state and an action. This makes the approach similar to [3]. [7] shows how a recurrent model/controller combination can be used for look-ahead planning without using TD-methods.
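The two modified error terms of step 4 can be sketched as follows; the buffering of the previous prediction and its derivative for one time step, as well as the function and argument names, are assumptions made for the illustration:

\begin{verbatim}
gamma = 0.9   # discount factor, 0 < gamma < 1

def model_td_update(w_ij, alpha_M, pain_now, pred_prev, pred_now, p_pred_prev):
    """Train a pain-predicting unit toward pain(t+1) + gamma * prediction(t+1).
    pred_prev and p_pred_prev are the unit's output and its dy/dw_ij from the
    previous time step; pain_now and pred_now are the current values."""
    td_error = pain_now + gamma * pred_now - pred_prev
    return w_ij + alpha_M * td_error * p_pred_prev

def controller_td_update(w_ij, alpha_C, pred_now, p_pred_now):
    """Gradient step shrinking the squared pain prediction (the factor 2 is
    absorbed into the learning rate)."""
    return w_ij - alpha_C * pred_now * p_pred_now
\end{verbatim}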

