Jürgen Schmidhuber's page (2005) on
[Image: the TUM CogBotLab robot hand]

STATISTICAL ROBOTICS
& THE ULTIMATE PROBABILISTIC APPROACH

Local links to statistical machine learning approaches:

1. Reinforcement learning

2. Universal Bayesian learners

3. Algorithmic probability and Kolmogorov complexity

4. Speed Prior

5. Metalearning

6. TU Munich Cogbotlab

7. Robot car history

8. Robot learning

9. Artificial Intelligence





Statistical robotics applies well-known techniques from statistics and probability theory (already widely used in computer vision) to problems of robotics. Typical methods include Kalman filters, EM, Bayesian networks, particle filters, etc. The robot's belief about its current state is a probability density function over the possible states; this belief is continually updated from new sensory inputs and from a prior probabilistic model of the effects of the robot's actions.
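In formulas, this belief update is the generic Bayes filter recursion (written here in standard textbook notation, not tied to any particular system):

\[
\mathrm{bel}(x_t) \;=\; \eta \, p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \, \mathrm{bel}(x_{t-1}) \, dx_{t-1}
\]

where x_t is the state, u_t the executed action, z_t the new sensory input, and \eta a normalizing constant. Kalman filters, grid methods, and particle filters are simply different ways of representing and updating bel(x_t).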

For example, robot car pioneer Ernst Dickmanns (1980s and 90s) used Kalman filters to deal with uncertain sensor readings of his autonomous vehicles.
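To illustrate the basic mechanism (this is only a one-dimensional toy sketch in Python, not Dickmanns' actual implementation; all noise variances and measurements below are invented):

# Minimal 1-D Kalman filter: fuse noisy position readings under a
# random-walk motion model. All numbers are illustrative only.
def kalman_step(x, p, z, q=0.1, r=1.0):
    # x, p: prior mean and variance of the state estimate
    # z: new measurement; q: process noise; r: measurement noise
    x_pred, p_pred = x, p + q          # predict: uncertainty grows by q
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # blend prediction and measurement
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 10.0                       # vague initial belief
for z in [1.2, 0.9, 1.1, 1.05]:        # noisy sensor readings
    x, p = kalman_step(x, p, z)
print(x, p)                            # the belief concentrates near 1.0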

Since 1990 or so, much of the work in the area of "probabilistic robotics" has focused on robot localization and map building, triggered by the pioneering work of Durrant-Whyte's group (Kalman filters / simultaneous localization and map building, SLAM) as well as Smith et al. Here is a list of original or frequently cited papers of the 1990s, as well as more recent works (a toy localization sketch follows the list):

1. R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships in robotics. In I.J. Cox and G.T. Wilfong, editors, Autonomous Robot Vehicles, volume 8, 167-193, 1990.

2. J. Leonard and H. Durrant-Whyte. Mobile Robot Localization by Tracking Geometric Beacons. IEEE Transactions on Robotics and Automation, 7(3), 1991.

3. J. J. Leonard and H. Durrant-Whyte. Simultaneous map building and localization for an autonomous mobile robot. IEEE International Conference on Intelligent Robot Systems, Osaka, Japan, 1991.

4. W. Burgard, D. Fox, D. Hennig, and T. Schmidt. Estimating the absolute position of a mobile robot using position probability grids. AAAI/IAAI, Vol. 2, 896-901, 1996.

5. G. Dissanayake, P. Newman, S. Clark, H. Durrant-Whyte, and M. Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 17(3):229-241, 2001.

6. M. Beetz, T. Schmitt, R. Hanek, S. Buck, F. Stulp, D. Schroeter, and B. Radig. The AGILO robot soccer team: experience-based learning and probabilistic reasoning in autonomous robot control. Autonomous Robots, 2004.

7. S. Thrun, W. Burgard, D. Fox. Probabilistic Robotics. MIT Press, 2005.
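As announced above, here is a toy Monte Carlo localization (particle filter) sketch in Python. The one-dimensional corridor, the door landmarks, the sensor model, and all noise levels are invented for illustration; none of this corresponds to the systems cited above.

import math, random

# Toy Monte Carlo localization on a 1-D corridor of length 10 with
# doors (landmarks) at known positions. All numbers are invented.
DOORS = [2.0, 5.0, 8.0]
WORLD = 10.0

def nearest_door_distance(x):
    return min(abs(x - d) for d in DOORS)

def sense(x):
    # Noisy measurement of the distance to the nearest door.
    return nearest_door_distance(x) + random.gauss(0.0, 0.2)

def likelihood(particle, z, sigma=0.3):
    # How well this particle explains the measurement z.
    diff = z - nearest_door_distance(particle)
    return math.exp(-diff * diff / (2.0 * sigma * sigma))

def mcl_step(particles, move, z):
    # Motion update: shift each particle, add noise, wrap around.
    particles = [(p + move + random.gauss(0.0, 0.1)) % WORLD for p in particles]
    # Measurement update: weight by likelihood, then resample.
    weights = [likelihood(p, z) for p in particles]
    return random.choices(particles, weights=weights, k=len(particles))

true_x = 1.0
particles = [random.uniform(0.0, WORLD) for _ in range(500)]
for _ in range(20):
    true_x = (true_x + 0.5) % WORLD
    particles = mcl_step(particles, 0.5, sense(true_x))
print(sum(particles) / len(particles))  # crude point estimate of the position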

The references above are about localization (in older but closely related computer vision approaches, the analogous goal is object tracking). Our long-term goal, however, is to build robots that learn complex action sequences for solving given tasks in unknown environments; localization is only one part of this.

More and more machine learning researchers are becoming aware that there is a universal, Bayesian, theoretically optimal way of doing this, at least if we ignore computation time for the moment. It is based on Solomonoff's universal mixture M of all computable probability distributions. If the probabilities of the world's responses to the robot's actions are indeed computable (which everyone assumes), then the robot may predict its future sensory inputs and rewards using M instead of the true but unknown distribution. According to recent theorems of Hutter (then on Schmidhuber's SNF grant 20-61847), the robot can indeed act optimally by choosing those action sequences that maximize the M-predicted reward. This may be dubbed the unbeatable, ultimate statistical approach to robotics: it demonstrates the limits of what is possible. Read more.
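Schematically (this is a compressed rendering of the Solomonoff / Hutter formulation, with technical conditions omitted): the universal mixture weights every computable distribution \nu by roughly 2 to the power of minus the length of its shortest program,

\[
M(x) \;=\; \sum_{\nu} w_\nu \, \nu(x), \qquad w_\nu \approx 2^{-K(\nu)},
\]

and the optimally acting agent chooses, at each time t, the action that maximizes M-expected cumulative reward up to a horizon m:

\[
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} (r_t + \cdots + r_m)\, M(o_t r_t \cdots o_m r_m \mid \text{history}, a_t \cdots a_m),
\]

where o_k and r_k denote the observations and rewards received after the corresponding actions.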

What if our notion of optimality takes computation time into account? Then use a Gödel machine equipped with the axioms of probability theory!

Universal AI
Speed Prior
Best robot car so far (Dickmanns, 1995)