
Jürgen Schmidhuber's AI Blog
Pronounce: You_again Shmidhoobuh
What's new?
@SchmidhuberAI


2021

★ Jan 2021: Our five submissions to ICLR 2021 got accepted (probability < 0.002, given the acceptance rate). Just in time for my birthday.

★ Jan 2021: 10-year anniversary. In 2011, DanNet triggered the deep convolutional neural network (CNN) revolution. Named after my outstanding postdoc Dan Ciresan, it was the first deep and fast CNN to win international computer vision contests, and it had a temporary monopoly on winning them, driven by a very fast implementation based on graphics processing units (GPUs). It achieved the first superhuman result in 2011. Now everybody is using this approach.

★ 2017—updated 2021 for 10th birthday of DanNet: History of computer vision contests won by deep CNNs since 2011. DanNet won 4 of them in a row before AlexNet and ResNet (a Highway Net with open gates) joined the party. Today, deep CNNs are standard in computer vision.

★ 2011—updated 2021 for 10th birthday of DanNet: First superhuman visual pattern recognition. At the IJCNN 2011 computer vision competition in Silicon Valley, our artificial neural network called DanNet performed twice as well as humans, three times as well as the closest artificial competitor, and six times as well as the best non-neural method.


2020

★ Dec 2020: 30-year anniversary of planning & reinforcement learning with recurrent world models and artificial curiosity (1990). This work also introduced high-dimensional reward signals, deterministic policy gradients for RNNs, and the GAN principle (widely used today). Agents with adaptive recurrent world models even suggest a simple explanation of consciousness & self-awareness (dating back three decades).
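
The following toy Python sketch (a hypothetical illustration, not the 1990 system) shows the basic idea of planning with a learned recurrent world model: candidate action sequences are "imagined" by rolling the model forward, and the first action of the best imagined sequence is executed. The one-dimensional world_model_step below is merely a stand-in for a trained recurrent network.

# Toy random-shooting planner over a (stand-in) recurrent world model.
import numpy as np

rng = np.random.default_rng(0)

def world_model_step(hidden, action):
    """Stand-in for one step of a trained recurrent world model:
    returns (next hidden state, predicted reward)."""
    next_hidden = np.tanh(0.9 * hidden + 0.5 * action)
    reward = -abs(next_hidden - 0.7)          # pretend the model predicts a goal at 0.7
    return next_hidden, reward

def plan(hidden, horizon=10, candidates=64):
    """Imagine many rollouts inside the model; return the first action of the best one."""
    best_return, best_first_action = -np.inf, 0.0
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, horizon)
        h, ret = hidden, 0.0
        for a in actions:                     # mental simulation, no real environment used
            h, r = world_model_step(h, a)
            ret += r
        if ret > best_return:
            best_return, best_first_action = ret, actions[0]
    return best_first_action

h = 0.0
for t in range(5):
    a = plan(h)
    h, r = world_model_step(h, a)             # here the model also plays the environment
    print(f"step {t}: action {a:+.2f}, state {h:+.2f}, reward {r:+.2f}")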

★ Dec 2020: 1/3 century anniversary of first publication on metalearning machines that learn to learn (1987). For its cover I drew a robot that bootstraps itself. 1992-: gradient descent-based neural metalearning. 1994-: Meta-Reinforcement Learning with self-modifying policies. 1997: Meta-RL plus artificial curiosity and intrinsic motivation. 2002-: asymptotically optimal metalearning for curriculum learning. 2003-: mathematically optimal Gödel Machine. 2020: new stuff!

★ Dec 2020: 1/3 century anniversary of Genetic Programming for code of unlimited size (1987).

★ Dec 2020: 10-year anniversary of our deep reinforcement learning with policy gradients for LSTM (2007-2010). Applications: DeepMind's Starcraft player (2019); OpenAI's dextrous robot hand & Dota player (2018)—Bill Gates called this a huge milestone in advancing AI.

★ Nov 2020: 15-year anniversary: 1st paper with "learn deep" in the title (2005). Our deep reinforcement learning & neuroevolution solved problems of depth 1000 and more. Soon after its publication, everybody started talking about "deep learning."

★ Oct 2020: 30-year anniversary of end-to-end differentiable sequential neural attention. Plus goal-conditional reinforcement learning. We had both hard attention (1990) and soft attention (1993). Today, both types are very popular.
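
As a generic illustration of the soft, differentiable variant (modern textbook form, not the 1990/1993 systems themselves), the Python sketch below computes attention weights as a softmax over query-key scores and returns the weighted sum of the values; hard attention would instead select a single location and is typically trained by reinforcement learning.

# Generic soft-attention sketch (illustrative only).
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def soft_attention(query, keys, values):
    """Differentiable attention: a softmax over query-key scores weights the values."""
    scores = keys @ query / np.sqrt(len(query))   # one relevance score per item
    weights = softmax(scores)                     # attention distribution over items
    return weights @ values, weights              # weighted sum of the values

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 4))      # 5 items with 4-dimensional keys
values = rng.normal(size=(5, 3))    # 5 items with 3-dimensional values
query = rng.normal(size=4)
context, w = soft_attention(query, keys, values)
print(context.shape, w.round(2))    # (3,) and 5 nonnegative weights summing to 1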

★ Sep 2020: 10-year anniversary of supervised deep learning breakthrough (2010). No unsupervised pre-training. The rest is history—this deep learning revolution quickly spread from Europe to North America and Asia.

★ Apr/Jun 2020: Critique of ACM's justification of the 2018 Turing Award for deep learning (backed up by 200+ references): 3 Europeans (2 from France, 1 from UK) went to North America where they republished methods & concepts first published by other Europeans whom they did not cite, not even in later surveys. Instead they credited each other, at the expense of the field's pioneers. Similar critique of 2019 Honda Prize—science must not allow corporate PR to distort the academic record.

★ Apr 2020: AI v Covid-19. I made a little cartoon and notes with references and links to the recent ELLIS workshops & JEDI Grand Challenge & other initiatives.

★ Apr 2020: Coronavirus geopolitics. Pandemics have greatly influenced the rise and fall of empires. What will be the impact of the current pandemic?

★ Feb 2020: 2010-2020: our decade of deep learning. The recent decade's most important developments and industrial applications based on our AI, with an outlook on the 2020s, also addressing privacy and data markets.


2017-2019

★ Oct 2019: Deep learning: our Miraculous Year 1990-1991. The deep learning neural networks of our team have revolutionised pattern recognition and machine learning, and are now heavily used in academia and industry. In 2020, we celebrate that many of the basic ideas behind this revolution were published within fewer than 12 months in our "Annus Mirabilis" 1990-1991 at TU Munich.

★ Nov 2018: Unsupervised neural networks fight in a minimax game (1990). To build curious artificial agents, I introduced a new type of active self-supervised learning in 1990. It is based on a duel where one neural net minimizes the objective function maximized by another. GANs are a simple special case.
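
The toy Python sketch below shows this duel in its familiar GAN instantiation (an illustrative example, not the 1990 curiosity system): a tiny "generator" is trained to minimize exactly the objective V that a logistic "discriminator" is trained to maximize; gradients are taken numerically just to keep the code short.

# Toy minimax duel: G descends the very objective V that D ascends (GAN special case).
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, dp):                      # logistic "real vs. fake" classifier
    w, c = dp
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

def generator(z, gp):                          # maps noise z to a sample
    a, b = gp
    return a * z + b

def value(dp, gp, real, noise):                # V(D, G) = E[log D(real)] + E[log(1 - D(fake))]
    eps = 1e-8
    fake = generator(noise, gp)
    return (np.mean(np.log(discriminator(real, dp) + eps))
            + np.mean(np.log(1.0 - discriminator(fake, dp) + eps)))

def num_grad(f, p, h=1e-4):                    # numerical gradient, for brevity only
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

dp = np.array([0.1, 0.0])                      # discriminator parameters (w, c)
gp = np.array([1.0, 0.0])                      # generator parameters (a, b)
for step in range(3000):
    real = rng.normal(3.0, 1.0, 128)           # "data" the generator should imitate: N(3, 1)
    noise = rng.normal(0.0, 1.0, 128)
    dp += 0.05 * num_grad(lambda p: value(p, gp, real, noise), dp)   # D maximizes V
    gp -= 0.05 * num_grad(lambda p: value(dp, p, real, noise), gp)   # G minimizes V

samples = generator(rng.normal(0.0, 1.0, 10000), gp)
print("generated samples: mean %.2f, std %.2f" % (samples.mean(), samples.std()))
# With these toy settings the generator's output distribution drifts toward the data.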

★ Aug 2017: Our impact on the world's most valuable public companies: Apple, Google, Microsoft, Facebook, Amazon... By 2015-17, neural nets developed in my labs were on over 3 billion devices such as smartphones, and used many billions of times per day, consuming a significant fraction of the world's compute. Examples: greatly improved (CTC-based) speech recognition on all Android phones, greatly improved machine translation through Google Translate and Facebook (over 4 billion LSTM-based translations per day), Apple's Siri and Quicktype on all iPhones, the answers of Amazon's Alexa, etc. Google's 2019 on-device speech recognition (on the phone, not the server) is still based on LSTM.

★ 2017-: Many jobs for PhD students and PostDocs


2015-2016

★ May 2015: Highway Networks: First working feedforward neural networks with over 100 layers (updated 2020 for 5-year anniversary). Highway Nets excel at ImageNet & natural language processing & other tasks. Based on the LSTM principle. Opening their gates yields a well-known special case called Residual Nets.
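
A minimal Python sketch of the idea (illustrative only, not the original implementation): a highway layer mixes a candidate transformation H(x) with the unchanged input x through learned transform and carry gates; holding both gates open reduces it to the residual form y = x + H(x) of ResNets.

# Highway layer sketch and its open-gate (residual) special case.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def highway_layer(x, Wh, bh, Wt, bt, Wc, bc):
    """y = H(x) * T(x) + x * C(x): gated mix of a transformation and the identity."""
    H = np.tanh(x @ Wh + bh)        # candidate transformation of the input
    T = sigmoid(x @ Wt + bt)        # transform gate
    C = sigmoid(x @ Wc + bc)        # carry gate (a common variant couples C = 1 - T)
    return H * T + x * C

def residual_block(x, Wh, bh):
    """Both gates held open (T = C = 1): y = x + H(x), the ResNet form."""
    return x + np.tanh(x @ Wh + bh)

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))                              # a small batch of activations
Wh, Wt, Wc = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
bh, bt, bc = np.zeros(d), np.zeros(d), np.zeros(d)
print(highway_layer(x, Wh, bh, Wt, bt, Wc, bc).shape)    # (4, 8)
print(residual_block(x, Wh, bh).shape)                   # (4, 8)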

★ 2014-15: Who invented backpropagation? (Updated 2020 for 1/2 century anniversary.) The "modern" version of backpropagation, the reverse mode of automatic differentiation, was published in 1970 by the Finnish master's student Seppo Linnainmaa. 2020: 60-year anniversary of Kelley's precursor (1960).
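
For readers unfamiliar with the term, here is a minimal illustrative Python sketch of reverse-mode automatic differentiation (a generic toy, not Linnainmaa's 1970 formulation): the forward pass records each operation's local partial derivatives, and a single backward sweep then accumulates adjoints via the chain rule, yielding the gradient with respect to all inputs at once.

# Minimal reverse-mode automatic differentiation sketch (illustrative only).
import math

class Var:
    """A value in the computation graph, with its adjoint (gradient)."""
    def __init__(self, value, parents=()):
        self.value = value        # forward result
        self.parents = parents    # pairs of (parent Var, local partial derivative)
        self.grad = 0.0           # adjoint, filled in by the backward sweep

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

def tanh(x):
    t = math.tanh(x.value)
    return Var(t, ((x, 1.0 - t * t),))

def backward(output):
    """One reverse sweep: visit nodes in reverse topological order and
    push each node's adjoint to its parents via the chain rule."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

# Example: f(a, b) = tanh(a * b + a); one backward sweep yields df/da and df/db.
a, b = Var(0.5), Var(-1.3)
f = tanh(a * b + a)
backward(f)
print(f.value, a.grad, b.grad)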

★ Jul 2016: I got the 2016 IEEE CIS Neural Networks Pioneer Award for "pioneering contributions to deep learning and neural networks."

★ Oct 2015: Brainstorm open source software for neural networks

★ Feb 2015: DeepMind's Nature paper and earlier related work

★ Jan 2015: Deep learning in neural networks: an overview


1987-2014

★ July 2013: Compressed network search: First deep learner to learn control policies directly from high-dimensional sensory input using reinforcement learning. (More.)

★ 2013: Sepp Hochreiter's fundamental deep learning problem (1991). (More.)

★ Sep 2012: First deep learner to win a medical imaging contest (cancer detection)

★ Mar 2012: First deep learner to win an image segmentation competition

★ Aug 2011: First superhuman visual pattern recognition.


✯ 1989-: Recurrent neural networks - especially Long Short-Term Memory or LSTM. (More.)

✯ 2011: Preface of book on recurrent neural networks

✯ 2009-: First contests won by recurrent nets (2009) and deep feedforward nets (2010)

✯ 2009-: Winning computer vision contests through deep learning

✯ 2005: Evolving recurrent neurons - first paper with "learn deep" in the title. (More.)

✯ 1991-: Deep learning & neural computer vision


✯ 1991-: First working deep learner based on unsupervised pre-training. (More.)

✯ 1991-: Unsupervised learning

✯ 1991-: Neural heat exchanger


✯ 1987-: Meta-learning or learning to learn


✯ 2002-: Asymptotically optimal curriculum learner

✯ 2003-: Gödel machines as mathematically optimal general self-referential problem solvers

✯ 2000-: Theory of universal artificial intelligence

✯ 2000-: Generalized algorithmic information & Kolmogorov complexity

✯ 2000-: Speed Prior: a new simplicity measure for near-optimal computable predictions

✯ 1996-: Computable universes / theory of everything / generalized algorithmic information


✯ 1989-: Reinforcement learning

✯ 1990-: Subgoal learning & hierarchical reinforcement learning. (More.)

✯ 1990-: Learning attentive vision (more) & goal-conditional reinforcement learning

✯ 1989: Reinforcement learning economies with credit conservation


✯ 1987-: Artificial evolution

✯ 1987-: Genetic programming

✯ 2005-: Evolino


✯ 2002-: Learning robots

✯ 2004-2009: CogBotLab at TU Munich

✯ 2004-2009: CoTeSys cluster of excellence

✯ 2007: Highlights of robot car history

✯ 2004: Statistical robotics

✯ 2004: Resilient machines & resilient robots

✯ 2000-: AI


✯ 1990-: Artificial curiosity & creativity & intrinsic motivation & developmental robotics. (More.)

✯ 1990-: Formal theory of creativity


✯ 1994-: Theory of beauty and femme fractale

✯ 2001: Lego Art

✯ 2010: Fibonacci web design

✯ 2007: J.S.'s painting of his daughters and related work


✯ 1995-: Switzerland

✯ 2010: A new kind of empire?

✯ 2010: Evolution of national Nobel Prize shares in the 20th century

✯ 2012: Olympic medal statistics & Bolt


✯ 2006: Is history converging? Again? & computer history speedup

✯ 2000s: Einstein & Zuse & Goedel & Turing & Gauss & Leibniz & Schickard & Solomonoff & Darwin & Haber & Bosch & Archimedes & Schwarzenegger & Schumacher & Schiffer

✯ 1981: Closest brush with fame & Bavarian poetry

✯ 1989-: A few old talk videos up to 2015


✯ 2010-: Master's in artificial intelligence

✯ 1987-: Online publications

✯ 1987-: What's new?

✯ 1963-: CV