This section describes experiments validating the usefulness of LS, ALS, and EIRA. First, using an illustrative application with a partially observable maze that has many more states and obstacles than those presented by various authors at ML95, we show how LS by itself can solve POMDPs with huge state spaces but low-complexity solutions (Q-learning variants fail on these tasks). Then we present experiments in which the task requires finding a stochastic policy for locating multiple goals. We show that ALS can exploit previous experience to speed up the search for solutions, and that EIRA combined with ALS (for short: ALS+EIRA) can outperform ALS by itself.
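Why LS can cope with huge state spaces may be sketched as follows. Unlike Q-learning, which must estimate values for the observable state space, LS enumerates candidate programs and allocates runtime in proportion to 2^(-l(p)) for a program of length l(p), so the cost of finding a solution depends on the solution's description length, not on the size of the state space. The toy sketch below (not the implementation used in our experiments; program space, success predicate, and phase bound are illustrative assumptions) shows this time-allocation scheme over bit-string programs:

```python
import itertools

def levin_search(run, max_phase=20):
    # Levin-search sketch: in phase k, each bit-string program p of
    # length l(p) receives a time budget of 2**(k - l(p)) steps, so
    # short (low-complexity) solutions are tried early and often.
    for k in range(1, max_phase + 1):
        for length in range(1, k + 1):
            budget = 2 ** (k - length)  # steps allotted per program
            for bits in itertools.product("01", repeat=length):
                prog = "".join(bits)
                # run(prog, budget) must return True iff prog solves
                # the task within the allotted budget (an assumption
                # of this sketch, not the paper's interface).
                if run(prog, budget):
                    return prog
    return None

# Toy task: any program starting with '101' solves it, provided the
# budget covers its (short) runtime of len(prog) steps.
solution = levin_search(lambda p, t: p.startswith("101") and t >= len(p))
```

In the toy run, the three-bit solution is first reached with a sufficient budget in phase k = 5, illustrating that total search effort is governed by solution length rather than by the environment's state count.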