
Conclusion

Recent theoretical and practical advances are currently driving a renaissance in the fields of universal learners and optimal search [59]. A new kind of AI is emerging. Does it really deserve the attribute ``new,'' given that its roots date back to the 1930s, when Gödel published the fundamental result of theoretical computer science [16] and Zuse started to build the first general-purpose computer (completed in 1941), and to the 1960s, when Solomonoff and Kolmogorov published their first relevant results? An affirmative answer seems justified, since it is the recent results on practically feasible, computable variants of the old incomputable methods that are currently reinvigorating the long-dormant field. The ``new'' AI is new in the sense that it abandons the mostly heuristic or non-general approaches of past decades, offering methods that are both general and theoretically sound, and provably optimal in a sense that is meaningful in the real world.

We are led to claim that the future will belong to universal or near-universal learners: learners more general than traditional reinforcement learners and decision makers, which depend on strong Markovian assumptions, and more general than learners based on traditional statistical learning theory, which often requires unrealistic i.i.d. or Gaussian assumptions. Due to ongoing hardware advances, the time has come for optimal search in algorithm space, as opposed to the limited space of reactive mappings embodied by traditional methods such as artificial feedforward neural networks.

It seems safe to bet that not only computer scientists but also physicists and other inductive scientists will start to pay more attention to the fields of universal induction and optimal search, since their basic concepts are irresistibly powerful, general, and simple. How long will it take for these ideas to unfold their full impact? A very naive and speculative guess, driven by wishful thinking, might be based on identifying the ``greatest moments in computing history'' and extrapolating from there. Which are those ``greatest moments''? Obvious candidates are:

  1. 1623: first mechanical calculator by Schickard starts the computing age (followed by machines of Pascal, 1640, and Leibniz, 1670).
  2. Roughly two centuries later: concept of a programmable computer (Babbage, UK, 1834-1840).
  3. One century later: fundamental theoretical work on universal integer-based programming languages and the limits of proof and computation (Gödel, Austria, 1931, reformulated by Turing, UK, 1936); first working programmable computer (Zuse, Berlin, 1941).

    (The next 50 years saw many theoretical advances as well as faster and faster switches--relays were replaced by tubes, tubes by single transistors, and single transistors by numerous transistors etched on chips--but arguably this was rather predictable, incremental progress without radical shake-up events.)

  4. Half a century later: World Wide Web (UK's Berners-Lee, Switzerland, 1990).

This list suggests that the interval between major breakthroughs has roughly halved each time. Extrapolating the trend, optimists should expect the next radical change to manifest itself one quarter of a century after the most recent one, that is, by 2015, which happens to coincide with the date when the fastest computers will match brains in terms of raw computing power, according to frequent estimates based on Moore's law. The author is confident that the coming 2015 upheaval (if any) will involve universal learning algorithms and Gödel machine-like, optimal, incremental search in algorithm space [56]--possibly laying a foundation for a remaining series of faster and faster additional revolutions culminating in an ``Omega point'' expected around 2040.
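To spell out the back-of-the-envelope arithmetic behind the 2040 figure (a sketch, assuming the roughly halving intervals continue exactly): the gaps between the breakthroughs listed above shrink from about 211 years (1623-1834) to 107 (1834-1941) to 49 (1941-1990) to a predicted 25 (1990-2015). If each further gap is again half the previous one, the remaining gaps form a convergent geometric series, and the revolutions accumulate at

\[
2015 + \sum_{k=1}^{\infty} \frac{25}{2^{k}} \;=\; 2015 + 25 \;=\; 2040,
\]

which is precisely the ``Omega point'' mentioned above.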

