
3. SIMULATIONS OF RDIA

We compared the performance of the RDIA variants described above to that of conventional random exploration (variants of random exploration are the methods employed by most authors).


Table 1: For random search and two RDIA variants, the evolution of the sum of Kullback-Leibler distances between estimated and true probability distributions is shown. In the beginning, RDIA takes a while to find out where it can expect to learn something, but then it quickly surpasses random search.
# Experiments | Random Search | RDIA (entropy) | RDIA (prob. diff.)
1 | $204.93$ | $204.93$ | $204.93$
1024 | $2.97$ | $67.73$ | $65.49$
2048 | $3.40$ | $40.59$ | $21.98$
4096 | $2.74$ | $10.57$ | $5.30$
8192 | $3.72$ | $4.08$ | $3.88$
16384 | $4.11$ | $2.44$ | $2.30$
32768 | $3.43$ | $1.27$ | $1.44$
65536 | $2.03$ | $0.76$ | $0.88$
131072 | $1.58$ | $0.54$ | $0.59$
262144 | $1.07$ | $0.35$ | $0.35$


A small environment. The first test environment consists of $n = 10$ states. There are $m = 10$ possible actions and thus $100$ possible experiments (state/action pairs). The transition probabilities are:

\begin{displaymath}
p_{ijk} = 1 \quad \mbox{for} \quad i = 1, \ldots, 9;\; j = 1, \ldots, 9;\; k = i;
\end{displaymath}


\begin{displaymath}
p_{ijk} = 1 \quad \mbox{for} \quad i = 1, \ldots, 9;\; j = 10;\; k = i + 1;
\end{displaymath}


\begin{displaymath}
p_{ijk} = \frac{1}{10} \quad \mbox{for} \quad i = 10;\; j = 1, \ldots, 10;\; k = 1, \ldots, 10;
\end{displaymath}

and $p_{ijk} = 0$ otherwise. The only state in which a lot of information can be acquired is $S_{10}$. After a while, RDIA (with parameters $\alpha = 0.5$, $\gamma = 0.9$, and $\mu = 0.1$) discovers this and establishes a policy that causes the agent to move as quickly as possible to $S_{10}$ from every other state. Random exploration, however, wastes most of its time on examining the states $S_1$ ... $S_{9}$, which soon become uninformative. This can be seen from Table 1, which compares random search with the two RDIA variants that worked best: (1) RDIA based on changes in entropy (equation 2), and (2) RDIA based on weighted probability changes. In the beginning, RDIA takes a while to find out where it can expect to learn something. Then it quickly catches on and surpasses random search.
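To make the setup concrete, the following is a minimal Python sketch (not the original code) that builds the true transition probabilities of the small environment and computes the sum of Kullback-Leibler distances tracked in Table 1; the direction of the divergence and the uniform initialization of the estimates are assumptions.

\begin{verbatim}
import numpy as np

# Minimal sketch (not the original code) of the small test environment:
# n = 10 states, m = 10 actions. In S_1 ... S_9, actions 1-9 leave the
# state unchanged and action 10 moves the agent to the next state; every
# action executed in S_10 leads to a uniformly random successor state.
n, m = 10, 10
p_true = np.zeros((n, m, n))              # p_true[i, j, k] corresponds to p_ijk
for i in range(n - 1):                    # 0-based indices for S_1 ... S_9
    p_true[i, :m - 1, i] = 1.0            # actions 1 ... 9: stay in S_i
    p_true[i, m - 1, i + 1] = 1.0         # action 10: move to S_{i+1}
p_true[n - 1, :, :] = 1.0 / n             # S_10: uniform over all successors

def kl_sum(p_est, p_true=p_true, eps=1e-12):
    """Sum over all state/action pairs of the Kullback-Leibler distance
    between true and estimated next-state distributions (the direction of
    the divergence is an assumption; the paper only names the measure)."""
    return float(np.sum(p_true * np.log((p_true + eps) / (p_est + eps))))

# Example: with uniform initial estimates, each of the 90 deterministic
# state/action pairs contributes roughly ln(10) to the sum.
print(kl_sum(np.full((n, m, n), 1.0 / n)))   # about 207
\end{verbatim}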

A bigger environment. The second test environment consists of $n = 100$ states. There are $m = 100$ possible actions and thus $10000$ possible experiments (state/action pairs). The transition probabilities are:

\begin{displaymath}
p_{ijk} = 1 \quad \mbox{for} \quad i = 1, \ldots, 99;\; j = 1, \ldots, 99;\; k = i;
\end{displaymath}


\begin{displaymath}
p_{ijk} = 1 \quad \mbox{for} \quad i = 1, \ldots, 99;\; j = 100;\; k = i + 1;
\end{displaymath}


\begin{displaymath}
p_{ijk} = \frac{1}{100} \quad \mbox{for} \quad i = 100;\; j = 1, \ldots, 100;\; k = 1, \ldots, 100;
\end{displaymath}

and $p_{ijk} = 0$ otherwise. The information content of the second environment (the sum of the entropies of the true transition probability distributions associated with all state/action pairs, measured in nats) is 460.517019.
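This value can be checked directly: only $S_{100}$ has non-deterministic transitions, and each of its 100 actions yields a uniform distribution over 100 successor states, so the information content is $100 \ln 100$. A minimal Python check (assuming natural-log entropies, which the stated value implies):

\begin{verbatim}
import math

# Only S_100 contributes: each of its 100 actions has a uniform
# distribution over 100 successors, with entropy ln(100); all other
# state/action pairs are deterministic and have zero entropy.
information_content = 100 * math.log(100)
print(information_content)   # about 460.517019, matching the value in the text
\end{verbatim}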

For random search and for RDIA based on entropy changes (with parameters $\alpha = 0.5$, $\gamma = 0.9$, and $\mu = 0.1$), Table 2 shows the number of time steps required to achieve given entropy values. The only state in which a lot of information can be acquired is $S_{100}$. RDIA quickly discovers this and establishes a policy that causes the agent to move as quickly as possible to $S_{100}$ from every other state. Random exploration, in contrast, wastes much of its time on the states $S_1$ ... $S_{99}$. Again, for small goal entropies, the advantage of reinforcement driven information acquisition is not as pronounced as in later stages, because Q-learning needs some time to settle on a strategy for performing experiments. As the goal entropy approaches the optimum, however, reinforcement driven information acquisition becomes much faster, by at least an order of magnitude.


Table 2: For random search and for RDIA based on entropy differences, this table shows the number of time steps required to achieve given entropy values. The optimal value (the true information content of the environment) is 460.517019. As the goal entropy approaches the optimum, RDIA becomes much faster. The entry marked ``unknown'' was not computed due to limited computation time.
Goal entropy | # Experiments: Random Search | # Experiments: RDIA
170.0 | $3.0 \times 10^6$ | $1.1 \times 10^6$
370.0 | $2.9 \times 10^7$ | $2.5 \times 10^6$
459.0 | $1.6 \times 10^9$ | $2.6 \times 10^7$
460.0 | unknown | $6.8 \times 10^7$
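For reference, the following sketch illustrates what a single RDIA step of the kind used here might look like: the agent refines a counted estimate of the transition distribution for the executed state/action pair, receives the resulting change in entropy as intrinsic reward, and applies a one-step Q-learning update. This is a hedged illustration rather than the paper's exact formulas (given in Section 2); the count-based estimator, the absolute entropy difference as reward, the reading of $\alpha$ as learning rate and $\gamma$ as discount factor, and the omission of $\mu$ are all assumptions.

\begin{verbatim}
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (natural log) of a probability vector."""
    return float(-np.sum(p * np.log(p + eps)))

def rdia_step(s, a, s_next, counts, Q, alpha=0.5, gamma=0.9):
    """One hypothetical RDIA step (a sketch, not the paper's exact update):
    refine the estimated transition distribution for (s, a) in place and
    reward the agent for how much the estimate's entropy changed."""
    old_est = counts[s, a] / counts[s, a].sum()
    counts[s, a, s_next] += 1.0
    new_est = counts[s, a] / counts[s, a].sum()
    reward = abs(entropy(old_est) - entropy(new_est))   # assumed reward form
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

# Usage sketch for the 100-state environment: counts start at a small
# positive prior so the estimated distributions are always well-defined.
n_states, n_actions = 100, 100
counts = np.ones((n_states, n_actions, n_states))
Q = np.zeros((n_states, n_actions))
\end{verbatim}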


Future work. 1. ``Exploitation/exploration trade-off''. In this paper, exploration was studied in isolation from exploitation. Is there an ``optimal'' way of combining both? For which kinds of goal-directed learning should RDIA be recommended? It is always possible to design environments where ``curiosity'' (the drive to explore the world) may ``kill the cat'', or at least may have a negative influence on exploitation performance. This is illustrated by additional experiments presented in [10]: in one environment described therein, exploration helps to speed up exploitation, but in another, curiosity slows exploitation down. The ``exploitation/exploration trade-off'' remains an open problem.

2. Additional experimental comparisons. It will be interesting to compare RDIA to better competitors than random exploration, such as Kaelbling's Interval Estimation algorithm [5].

3. Function approximators. It will also be interesting to replace the Q-table by function approximators such as backprop networks. Previous experimental work by various authors indicates that in certain environments this might improve performance, even though the theoretical foundations of combining Q-learning with function approximators are still weak.

