## New learning algorithm is much faster than standard backprop with an optimal learning rate: O(30) vs. O(1000)

## Gradient descent metalearns an online learning algorithm that outperforms gradient descent itself.

## Metalearning automatically avoids overfitting, since it punishes overfitting online learners just like slow ones: both accumulate more cumulative error!
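The setup behind these claims can be illustrated with a toy sketch (this is an illustrative assumption, not the slide authors' actual method): an inner online learner accumulates prediction error over a data sequence, and an outer gradient descent tunes the learner's hyperparameter (here, just its learning rate) to minimize that cumulative error. Because the meta-objective sums errors over the whole sequence, both slow learners and overfitting learners score badly. All function names and constants below are made up for illustration; the meta-gradient is estimated by a simple finite difference.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = rng.standard_normal(100)
ys = 2.0 * xs  # targets generated by a fixed "true" weight of 2.0

def cumulative_error(lr):
    """Run an online SGD learner over the sequence and return its
    cumulative squared prediction error (the meta-objective)."""
    w, total = 0.0, 0.0
    for x, y in zip(xs, ys):
        err = w * x - y
        total += err ** 2        # error is charged before the update
        w -= lr * 2 * err * x    # one online gradient step
    return total

# Meta-level: plain gradient descent on the learning rate itself,
# using a central finite difference as the meta-gradient estimate.
lr, meta_lr, eps = 0.01, 1e-6, 1e-5
for _ in range(200):
    g = (cumulative_error(lr + eps) - cumulative_error(lr - eps)) / (2 * eps)
    lr -= meta_lr * g
```

After meta-training, the tuned learning rate yields a lower cumulative error over the sequence than the initial one, i.e. the meta-learned online learner outperforms its starting configuration.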

