Recent theoretical and practical advances are currently driving a renaissance in the fields of universal learners and optimal search [56]. A new kind of AI is emerging. Does it really deserve the attribute ``new,'' given that its roots date back to the 1960s, just two decades after Zuse built the first general purpose computer in 1941? An affirmative answer seems justified, since it is the recent results on practically feasible computable variants of the old incomputable methods that are currently reinvigorating the long dormant field. The ``new'' AI is new in the sense that it abandons the mostly heuristic or non-general approaches of past decades, offering methods that are both general and theoretically sound, and provably optimal in a sense that is meaningful in the real world.

We are led to claim that the future will belong to universal or near-universal learners that are more general than traditional reinforcement learners and decision makers, which depend on strong Markovian assumptions, or than learners based on traditional statistical learning theory, which often requires unrealistic i.i.d. or Gaussian assumptions. Thanks to ongoing hardware advances, the time has come for optimal search in algorithm space, as opposed to the limited space of reactive mappings embodied by traditional methods such as artificial feedforward neural networks.

It seems safe to bet that not only computer scientists but also physicists and other inductive scientists will start to pay more attention to the fields of universal induction and optimal search, since their basic concepts are irresistibly simple, general, and powerful. How long will it take for these ideas to unfold their full impact? A very naive and speculative guess driven by wishful thinking might be based on identifying the ``greatest moments in computing history'' and extrapolating from there. Which are those ``greatest moments''? Obvious candidates are:

  1. 1640: first mechanical calculator (Pascal, France).
  2. Two centuries later: concept of a programmable computer (Babbage, UK).
  3. One century later: first working programmable computer (Zuse, Berlin), plus fundamental theoretical work on universal integer-based programming languages and the limits of proof and computation (Gödel, Austria, reformulated by Turing, UK). (The next 50 years saw many theoretical advances as well as faster and faster switches--relays were replaced by tubes, tubes by transistors, single transistors by numerous transistors etched on chips--but arguably this was rather predictable, incremental progress without radical shake-up events.)
  4. Half a century later: the World Wide Web (UK's Berners-Lee, Switzerland).
This list seems to suggest that each major breakthrough tends to come twice as fast as the previous one: the intervals shrink from roughly 200 years to 100 to 50. Extrapolating the trend, optimists should expect another radical change by 2015, which happens to coincide with the date when the fastest computers will match brains in terms of raw computing power, according to frequent estimates based on Moore's law. The author is confident that the coming 2015 upheaval (if any) will involve universal learning algorithms and optimal incremental search in algorithm space--possibly laying a foundation for the remaining series of faster and faster additional revolutions culminating in an ``Omega point'' expected around 2040.
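The arithmetic behind the 2040 ``Omega point'' is a geometric series: starting from the World Wide Web around 1990, the intervals 25 + 12.5 + 6.25 + ... sum to 50 years, so the dates of successive revolutions converge to 1990 + 50 = 2040. A minimal sketch of this extrapolation (the starting date 1990 and first interval of 25 years are read off the list above; the function name is illustrative):

```python
# Extrapolating the "greatest moments" timeline: each interval between
# breakthroughs is assumed to be half the previous one, so the dates
# form a geometric series converging to a finite "Omega point".

def breakthrough_dates(start=1990.0, first_interval=25.0, n=12):
    """Dates of successive hypothetical revolutions, each interval halving."""
    dates, date, interval = [], start, first_interval
    for _ in range(n):
        date += interval
        dates.append(date)
        interval /= 2.0
    return dates

dates = breakthrough_dates()
print(dates[0])   # 2015.0 -- the next expected breakthrough
print(dates[-1])  # approaches the limit 1990 + 2 * 25 = 2040
```

The limit exists only because the halving assumption makes the series converge; a constant or growing interval would push the horizon out indefinitely.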

Juergen Schmidhuber 2003-02-04