
OUTLOOK: INCREMENTAL SEARCH

As seen above, the probabilistic search algorithm inspired by universal search can lead to excellent generalization performance. However, the tasks above are not very typical in the sense that the learning system does not receive any feedback about its progress. The success criterion is binary: either the system solves the task, or it doesn't. In ``cruel'' environments providing nothing but such limited evaluative feedback, not much can be done. For such cases, Levin's algorithm is indeed optimal.

Incremental learning. With many typical learning situations in the real world, however, there is more informative feedback. For instance, ``supervised'' gradient-based neural net algorithms like back-prop [Werbos, 1974,LeCun, 1985,Parker, 1985,Rumelhart et al., 1986] make use of the information provided by error signals (distances between actual network outputs and target values). Unlike universal search, these algorithms adjust network weights incrementally: solution candidates found in previous trials serve as a basis for further improvements. Reinforcement learning algorithms [Watkins, 1989,Dayan and Sejnowski, 1994,Barto et al., 1983,Williams, 1988,Schmidhuber, 1989] receive less informative environmental feedback than supervised learning algorithms (see Barto (1989) for an overview), but they are designed to work incrementally as well. For instance, they tend to exploit the information provided by the magnitude of the rewards and by the amount of time between rewarding events. Again, ``good'' solutions form the basis for ``better'' solutions. The same holds for simple hill-climbing and for ``evolutionary'' and ``genetic'' algorithms (GAs).

Previous suggestions. The original universal search procedure as formulated by Levin is not designed for incremental learning situations. Suggestions for extending universal search to deal with incremental learning were made in [Solomonoff, 1986,Paul and Solomonoff, 1991], and also in [Schmidhuber, 1994] -- the first paper presenting simulations based on incremental extensions.

Problems with previous suggestions. In realistic, unknown environments, each event / action / trial / learning process occurring in the life of a learning system may affect performance and learning processes at any later time. This fact is not properly taken into account by the methods mentioned above, by existing reinforcement learning algorithms, or even by naive, inefficient, but supposedly infallible exhaustive search: in general, sequential and systematic generate-and-test of all policy candidates will change the environment, thus changing the conditions for policy candidates considered earlier. In fact, a reasonable performance criterion for such general (but typical) situations was missing.

Performance criterion for incremental learning. Recent work [Schmidhuber, 1996] defined such a criterion: the ``reward acceleration criterion'' (RAC). Although not essential for the message of the current paper, its main concepts are briefly reviewed here.

Suppose a reinforcement learner's life in an unknown environment $\cal E$ lasts from time 0 to unknown time $T$. Occasionally $\cal E$ provides real-valued reward. The cumulative reward obtained between time 0 and time $t > 0$ is denoted by $R(t)$ (where $R(0) = 0$). The learner's goal at time $t$ is to use experience to maximize $R(T) - R(t)$.

Between time 0 and $T$, the learner repeats the following cycle over and over again ($\cal A$ denotes a set of possible actions): Select and execute $a \in \cal A$ with probability $P( a \mid \cal I )$, where the internal state $\cal I$ is a vector of variable, real-valued components, and the modifiable policy $P$ is a variable, conditional probability distribution on the possible actions, given the current $\cal I$. $a$ may change $\cal E$, $\cal I$, $P$ (environmental inputs are represented by certain components of $\cal I$). Actions that modify $P$ are called learning algorithms. Since their execution probabilities also depend on $P$, $P$ essentially controls the way it modifies itself (``self-reference'').
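
For concreteness, the basic cycle can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (the names Learner, select_action, and environment.execute are hypothetical, and the policy is reduced to one probability per action, ignoring the dependence on $\cal I$); it is not the implementation used in the cited papers:

import random

class Learner:
    # Minimal sketch of the basic cycle: internal state I, modifiable policy P.
    def __init__(self, actions):
        self.actions = actions
        self.I = [0.0]                                      # internal state: variable real-valued components
        self.P = {a: 1.0 / len(actions) for a in actions}   # simplified policy: one probability per action

    def select_action(self):
        names = list(self.P)
        weights = [self.P[a] for a in names]
        return random.choices(names, weights=weights)[0]    # select a with probability P(a | I)

    def cycle(self, environment):
        a = self.select_action()
        # Executing a may change the environment, the internal state I,
        # and -- if a is a learning algorithm -- the policy P itself,
        # so P also governs the probability of its own modifications.
        environment.execute(a, self)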

Some actions group learning algorithms and other actions into sequences called ``policy modification processes'' (PMPs). The $i$-th PMP in system life is denoted PMP$_i$; it starts at time $t^1_i > 0$, ends at time $t^2_i < T$ (where $t^1_i < t^2_i$), and computes a $P$ modification denoted $M_i$.

Certain actions trigger an evaluation of the system's performance so far. Such evaluations may cause policy modifications to be undone (by restoring the previous policy -- in practical implementations, this requires storing the previous values of modified policy components on a stack). At a given time $t$ in the learner's life, let $Valid(t)$ denote the set of those previous $t^1_i$ whose corresponding $M_i$ have not been undone yet. If $Valid(t)$ is not empty, then let $v_i ~~ (i \in \{1, 2, \ldots, V(t) \})$ denote the $i$-th valid time, ordered according to size. RAC is satisfied if either $Valid(t)$ is empty (trivial case) or if

\begin{displaymath}
\frac{R(t)}{t}
<
\frac{R(t) - R(v_1)}{t - v_1}
<
\frac{R(t) - R(v_2)}{t - v_2}
<
\ldots
<
\frac{R(t) - R(v_{V(t)})}{t - v_{V(t)}}.
\end{displaymath} (4)

RAC essentially says that each still valid modification corresponds to a long-term reward acceleration.
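
For illustration, the check of inequality (4) can be written down directly. The following Python sketch assumes the cumulative reward is available as a function R of time and the still valid modification times are kept in a sorted list; the names are hypothetical and the sketch is not the implementation of the cited papers:

def rac_satisfied(t, R, valid_times):
    # R: cumulative reward as a function of time, with R(0) = 0
    # valid_times: sorted list v_1 < v_2 < ... < v_{V(t)} of still valid modification times
    if not valid_times:
        return True                                # trivial case: Valid(t) is empty
    ratios = [R(t) / t]                            # average reward over the whole life so far
    ratios += [(R(t) - R(v)) / (t - v) for v in valid_times]
    # each still valid modification must mark a strict acceleration of average reward
    return all(a < b for a, b in zip(ratios, ratios[1:]))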

Using stack-based backtracking methods such as those in [Schmidhuber, 1996,Wiering and Schmidhuber, 1996,Zhao and Schmidhuber, 1996,Schmidhuber et al., 1996], one can guarantee that RAC is satisfied each time a new PMP starts (of course, the time consumed by backtracking itself is taken into account by RAC). Moreover, in typical environments, RAC can be satisfied in a non-trivial way. Since the success of a policy modification recursively depends on the success of later modifications for which it is setting the stage, the framework provides a sound basis for ``learning how to learn.'' It is quite general: a wide variety of learning algorithms may be included in the action set $\cal A$. Numerous experiments in the papers mentioned above show its practical feasibility. For instance, in [Wiering and Schmidhuber, 1996], $\cal A$ includes an extension of Levin search for generating the PMPs. This leads to significant performance speed-ups in experiments with sequences of more and more complex tasks.
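
A stack-based way of restoring RAC can be sketched as follows. This is again a simplified Python illustration rather than the algorithm of the cited papers: the stack is assumed to hold, for each still valid modification, its start time together with the overwritten policy components, and the most recent modification is undone first until inequality (4) holds (possibly trivially, with an empty stack):

def enforce_rac(t, R, stack, restore):
    # stack: list of (v_i, saved_components) pairs, oldest first;
    #        saved_components are the policy values overwritten by M_i
    # restore: callback that writes saved components back into the policy
    def holds():
        ratios = [R(t) / t] + [(R(t) - R(v)) / (t - v) for v, _ in stack]
        return all(a < b for a, b in zip(ratios, ratios[1:]))
    while stack and not holds():
        v, saved = stack.pop()                     # undo the most recent surviving modification
        restore(saved)
    # on exit, RAC is satisfied (possibly trivially, with an empty stack)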

