Using Predictive Representations to Improve Generalization in Reinforcement Learning, Eddie J. Rafols, Mark B. Ring, Richard S. Sutton, Brian Tanner; Proceedings of the 19th International Joint Conference on Artificial Intelligence, 2005.


Abstract
The predictive representations hypothesis holds that particularly good generalization will result from representing the state of the world in terms of predictions about possible future experience. This hypothesis has been a central motivation behind recent research in, for example, predictive state representations (PSRs) and temporal-difference (TD) networks. In this paper we present the first explicit investigation of this hypothesis. We show in a reinforcement-learning example (a grid-world navigation task) that a predictive representation in tabular form can learn much faster than both a tabular explicit-state representation and a tabular history-based method.