Reinforcing the Driving Quality of Soccer Playing Robots by Anticipation

Alexander Gloye, Fabian Wiesel, Oliver Tenchio and Mark Simon

This paper shows how an omnidirectional robot can learn to correct inaccuracies when driving, or even learn to issue corrective motor commands when a motor fails, whether partially or completely. Driving inaccuracies are unavoidable, since not all wheels have the same grip on the surface and not all motors provide exactly the same power. When a robot starts driving, the real system response differs from the ideal behavior assumed by the control software. Malfunctioning motors are also a fact of life that we have to take into account. Our approach is to let the control software learn how the robot reacts to instructions sent from the control computer. We use a neural network or a linear model to learn the robot's response to the commands. The model can be used to predict deviations from the desired path and to take corrective action in advance, thus increasing the robot's driving accuracy. The model can also be used to monitor the robot and assess whether it is performing according to its learned response function. If it is not, the response function of the malfunctioning robot can be re-learned and updated. We show that even if a robot loses power in one motor, the system can re-learn to drive the robot along a straight path, even though the robot is a black box and we are not aware of how the commands are applied internally.
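To make the idea concrete, the following is a minimal sketch of how a learned linear response model could be used for anticipatory correction and for malfunction monitoring. It is illustrative only, not the authors' implementation: the linear model, the simulated data, the variable names, and the detection threshold are all assumptions introduced here.

```python
import numpy as np

# Logged training data: commanded velocities (vx, vy, omega) sent to the robot
# and the velocities actually observed (e.g. by the overhead vision system).
# Here the "true" response matrix simulates a robot with a weak sideways motor.
rng = np.random.default_rng(0)
commanded = rng.uniform(-1.0, 1.0, size=(500, 3))
true_response = np.array([[0.9, 0.0, 0.05],
                          [0.0, 0.7, 0.00],
                          [0.1, 0.0, 0.95]])
observed = commanded @ true_response.T + 0.01 * rng.standard_normal((500, 3))

# Fit a linear response model R by least squares, so that observed ~= R @ commanded.
X, *_ = np.linalg.lstsq(commanded, observed, rcond=None)
R = X.T

def corrective_command(desired_velocity):
    """Invert the learned response so the observed motion matches the desired one."""
    return np.linalg.solve(R, desired_velocity)

def is_malfunctioning(cmd, obs, threshold=0.1):
    """Flag the robot if its observed response deviates from the model's prediction,
    which would trigger re-learning of the response function."""
    prediction = R @ cmd
    return np.linalg.norm(prediction - obs) > threshold

desired = np.array([1.0, 0.0, 0.0])          # drive straight ahead
print("corrected command:", corrective_command(desired))
```

In this sketch the corrective command is obtained by inverting the learned model; in practice a neural network model, as mentioned above, would be queried or inverted numerically instead of with a direct linear solve.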