1990: Planning & Reinforcement Learning with Recurrent World Models and Artificial Curiosity

Jürgen Schmidhuber (December 2020)
Pronounce: You_again Shmidhoobuh
AI Blog
@SchmidhuberAI



Abstract. In 2020, we celebrated the 30th anniversary of our papers on planning & reinforcement learning with artificial neural networks (NNs) [AC90] [PLAN2]. The technical report FKI-126-90 introduced several concepts that are now widely used: (1) planning with recurrent NNs (RNNs) as world models, (2) high-dimensional reward signals (also used as inputs for a neural controller), (3) deterministic policy gradients for RNNs, (4) artificial curiosity [AC90b] and intrinsic motivation through NNs that are both generative and adversarial (GANs are a special case [AC20]). In the 2010s, these concepts became popular as compute became cheaper. Our more recent extensions since 2015 [PLAN4-6] [OBJ2-4] address planning in abstract concept spaces and learning to think. Agents with adaptive recurrent world models even suggest a simple explanation of consciousness and self-awareness (dating back three decades [CON16]). I drew the illustrations of [AC90] by hand—some of them are shown here.


Learning & Planning with Recurrent World Models; Adversarial Generative Nets for Artificial Curiosity and related concepts (1990)

In February 1990, I published the Technical Report FKI-126-90 [AC90] (revised in November), which introduced several concepts that have become popular in the field of Machine Learning.

The report described a system for reinforcement learning (RL) and planning based on a combination of two recurrent neural networks (RNNs) called the controller and the world model [AC90]. The controller tries to maximize cumulative expected reward in an initially unknown environment. The world model learns to predict the consequences of the controller's actions. The controller can use the world model to plan ahead for several time steps through what's now called a rollout, selecting action sequences that maximise predicted cumulative reward [AC90] [PLAN2]. This integrated architecture for learning, planning, and reacting was apparently published [AC90] [PLAN2] before Rich Sutton's DYNA [DYNA90] [DYNA91]. ([AC90] also cites work on system identification with feedforward NNs [WER87-89] [MUN87] [NGU89] [JOR90] [DL1].) The approach led to lots of follow-up publications, not only in 1990-91 [PLAN2-3] [PHD], but also in recent years, e.g., [PLAN4-6]. See also Sec. 11 of [MIR] and our 1990 application of world models to the learning of sequential attention [ATT] [ATT0-2].
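To make the rollout idea concrete, here is a minimal, hedged sketch in Python (not the 1990 RNN-based system): a toy world model predicts next states and rewards, and the planner simulates candidate action sequences inside the model and selects the one with the highest predicted cumulative reward. The class name, toy dynamics, and reward function are illustrative assumptions.

```python
import numpy as np

class ToyWorldModel:
    """Toy stand-in for a learned world model: predicts (next_state, reward)."""
    def predict(self, state, action):
        next_state = state + action          # placeholder learned dynamics
        reward = -np.abs(next_state).sum()   # placeholder learned reward model
        return next_state, reward

def plan_by_rollout(model, state, candidate_action_seqs, horizon=5):
    """Simulate each candidate action sequence inside the model and return
    the sequence with the highest predicted cumulative reward."""
    best_seq, best_return = None, -np.inf
    for seq in candidate_action_seqs:
        s, total = state.copy(), 0.0
        for action in seq[:horizon]:
            s, r = model.predict(s, action)
            total += r
        if total > best_return:
            best_seq, best_return = seq, total
    return best_seq, best_return

# Usage: choose among random candidate sequences of 2-D actions.
rng = np.random.default_rng(0)
candidates = [rng.uniform(-1, 1, size=(5, 2)) for _ in range(32)]
best_seq, best_return = plan_by_rollout(ToyWorldModel(), np.zeros(2), candidates)
```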

High-dimensional reward signals (1990) as inputs of a recurrent neural net controller

Another novelty of 1990 was the concept of high-dimensional reward signals. Traditional RL has focused on one-dimensional reward signals. Humans, however, have millions of informative sensors for different types of pain, pleasure, etc. To my knowledge, reference [AC90] was the first paper on RL with multi-dimensional, vector-valued pain and reward signals coming in through many different sensors, where cumulative values are predicted for all those sensors, not just for a single scalar overall reward. Compare what was later called a general value function [GVF]. Unlike previous adaptive critics, the one of 1990 [AC90] was multi-dimensional and recurrent.

Unlike in traditional RL, those reward signals were also used as informative inputs to the controller NN learning to execute actions that maximise cumulative reward. This is also relevant for metalearning. Compare Sec. 13 of [MIR] and Sec. 5 of [DEC] and Sec. 3 & Sec. 6 of [META].
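A hedged illustration of these two points, under assumed toy dimensions and linear models (nothing here is the original 1990 architecture): the critic predicts a vector of cumulative values, one per reward sensor, updated by a per-sensor TD-style rule, and the controller receives the current reward vector as part of its input.

```python
import numpy as np

rng = np.random.default_rng(1)
obs_dim, act_dim, n_reward_sensors = 4, 2, 3

# Critic: linear map from observation to a *vector* of predicted cumulative
# values, one entry per reward sensor (instead of a single scalar).
W_critic = rng.normal(scale=0.1, size=(n_reward_sensors, obs_dim))

def predict_values(obs):
    return W_critic @ obs

# Controller: its input is the observation concatenated with the reward vector.
W_ctrl = rng.normal(scale=0.1, size=(act_dim, obs_dim + n_reward_sensors))

def act(obs, reward_vec):
    return np.tanh(W_ctrl @ np.concatenate([obs, reward_vec]))

# TD-style update of the vector-valued critic: one TD error per reward sensor.
def td_update(obs, reward_vec, next_obs, lr=0.01, gamma=0.95):
    global W_critic
    td_error = reward_vec + gamma * predict_values(next_obs) - predict_values(obs)
    W_critic += lr * np.outer(td_error, obs)
```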

Are such techniques applicable in the real world? For example, can NNs successfully plan to steer real robots? Yes, they can. My former postdoc Alexander Gloye-Förster led FU Berlin's FU-Fighters team, which became RoboCup world champion in 2004 in the fastest league (robot speeds of up to 5 m/s) [RES5]. Their RoboCup robots planned ahead with neural nets, in line with the ideas outlined in [AC90].

In 2005, Alexander and his team also showed how such concepts can be used to build so-called self-healing robots [RES5] [RES7]. They constructed the first resilient machines using continuous self-modeling. Their robots could autonomously recover from certain types of unexpected damage through adaptive self-models derived from actuation-sensation relationships, which were then used to generate forward locomotion.

The 1990 FKI tech report [AC90] also described the basics of deterministic policy gradients for RNNs. Its section "Augmenting the Algorithm by Temporal Difference Methods" combined the Dynamic Programming-based Temporal Difference method [TD] for predicting cumulative (possibly multi-dimensional) rewards with a gradient-based predictive model of the world, to compute weight changes for the separate control network. See also Sec. 2.4 of the 1991 follow-up paper [PLAN3] (and compare [NAN1-5]). Variants of this were used a quarter century later by DeepMind [DPG] [DDPG]. See also Sec. 14 of [MIR] and Sec. 5 of [DEC].
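The following PyTorch sketch illustrates the core mechanism in a heavily simplified form: the predicted reward is backpropagated through a (here frozen, feedforward) world model to obtain weight changes for the separate controller. The original report used RNNs and a TD component; the network sizes, learning rate, and data below are arbitrary assumptions.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2
controller = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))
world_model = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.Tanh(), nn.Linear(32, 1))
for p in world_model.parameters():          # keep the model fixed while training the controller
    p.requires_grad_(False)

opt = torch.optim.SGD(controller.parameters(), lr=1e-2)
obs = torch.randn(16, obs_dim)              # a batch of observations

action = controller(obs)                    # deterministic actions
predicted_reward = world_model(torch.cat([obs, action], dim=-1))
loss = -predicted_reward.mean()             # maximize predicted cumulative reward
opt.zero_grad()
loss.backward()                             # gradient flows through the world model into the controller
opt.step()
```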

Predictability Minimization: unsupervised minimax game where one neural network minimizes the objective function maximized by another

Finally, the 1990 paper also introduced Artificial Curiosity through Adversarial Generative NNs. As humans interact with the world, they learn to predict the consequences of their actions. They are also curious, designing experiments that lead to novel data from which they can learn more. To build curious artificial agents, the papers [AC90, AC90b] introduced a new type of active unsupervised or self-supervised learning with intrinsic motivation. It is based on a minimax game where one neural net (NN) minimizes the objective function maximized by another NN [R2]. Today, I refer to this duel between two unsupervised adversarial NNs as Adversarial Artificial Curiosity [AC20], to distinguish it from our later types of Artificial Curiosity and intrinsic motivation since 1991 [AC91b-AC20] [PP-PP2].

How does Adversarial Artificial Curiosity work? The controller NN (probabilistically) generates outputs that may influence an environment. The world model NN predicts the environmental reactions to the controller's outputs. Using gradient descent, the world model minimizes its error, thus becoming a better predictor. In a zero-sum game, however, the controller tries to find outputs that maximize the error of the world model: the model's loss is the controller's gain. Hence the controller is motivated to invent novel outputs or experiments that yield data that the world model still finds surprising, until the data becomes familiar and eventually boring. Compare more recent summaries and extensions of this now popular principle, e.g., [AC09].
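Here is a hedged Python sketch of that minimax principle (a toy stand-in, not the 1990 system): the world model is trained to minimize its prediction error, while the controller is trained to produce actions whose consequences the model predicts badly. The toy environment, network sizes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 4, 2
controller = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))
world_model = nn.Sequential(nn.Linear(obs_dim + act_dim, 32), nn.Tanh(), nn.Linear(32, obs_dim))
opt_c = torch.optim.SGD(controller.parameters(), lr=1e-2)
opt_m = torch.optim.SGD(world_model.parameters(), lr=1e-2)

def environment(obs, action):               # toy stand-in for the real world
    return torch.sin(obs) + 0.5 * torch.cat([action, action], dim=-1)

for step in range(100):
    obs = torch.randn(16, obs_dim)
    action = controller(obs)
    next_obs = environment(obs, action).detach()

    # World model step: minimize the prediction error.
    pred = world_model(torch.cat([obs, action.detach()], dim=-1))
    model_loss = ((pred - next_obs) ** 2).mean()
    opt_m.zero_grad(); model_loss.backward(); opt_m.step()

    # Controller step: maximize the same error (its intrinsic curiosity reward).
    pred = world_model(torch.cat([obs, controller(obs)], dim=-1))
    curiosity_reward = ((pred - next_obs) ** 2).mean()
    opt_c.zero_grad(); (-curiosity_reward).backward(); opt_c.step()
```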

That is, in 1990, we already had self-supervised neural nets that were both generative and adversarial (using much later terminology from 2014 [GAN1] [R2]), generating experimental outputs yielding novel data, not only for stationary patterns but also for pattern sequences, and even for the general case of RL. In fact, the popular Generative Adversarial Networks (GANs) [GAN0] [GAN1] (2010-2014) are an application of Adversarial Curiosity [AC90] where the environment simply returns 1 or 0 depending on whether the controller's current output is in a given set [AC20] [R2]. See also Sec. 5 of [MIR] and Sec. 4 of [DEC] and Sec. XVII of [T20]. BTW, note that Adversarial Curiosity [AC90, AC90b] & GANs [GAN0, GAN1] & our Adversarial Predictability Minimization (1991) [PM1-2] are very different from other early adversarial machine learning settings [S59] [H90] which neither involved unsupervised NNs nor were about modeling data nor used gradient descent [AC20].

As I have frequently pointed out since 1990 [AC90], the weights of an NN should be viewed as its program. Some argue that the goal of a deep NN is to learn useful internal representations of observed data—there is even an International Conference on Learning Representations called ICLR. But actually the NN is learning a program (the weights or parameters of a mapping) that computes such representations in response to the input data. The outputs of typical NNs are differentiable with respect to their programs. That is, a simple program generator can compute a direction in program space where one may find a better program [AC90]. Much of my work since 1989 has exploited this fact. See also Sec. 18 of [MIR].

Learning to think

The original controller/model (C/M) planner of 1990 [AC90] focused on naive "millisecond by millisecond planning," trying to predict and plan every little detail of its possible futures. Even today, this is still a standard approach in many RL applications, e.g., RL for board games such as Chess and Go. My more recent work of 2015, however, has focused on abstract (e.g., hierarchical) planning and reasoning [PLAN4-5]. Guided by algorithmic information theory, I described RNN-based AIs (RNNAIs) that can be trained on never-ending sequences of tasks, some of them provided by the user, others invented by the RNNAI itself in a curious, playful fashion, to improve its RNN-based world model. Unlike the system of 1990 [AC90], the RNNAI [PLAN4] learns to actively query its model for abstract reasoning and planning and decision making, essentially learning to think [PLAN4]. Compare also our recent related work on learning (hierarchically) structured concept spaces based on abstract objects [OBJ2-5]. The ideas of [PLAN4-5] can be applied to many other cases where one RNN-like system exploits the algorithmic information content of another. They also explain concepts such as mirror neurons [PLAN4].

A world model extracts compressed spatio-temporal representations which are fed into compact and simple policies trained by evolution (David Ha)

In recent work with David Ha of Google (2018) [PLAN6], a world model extracts compressed spatio-temporal representations which are fed into compact and simple policies trained by evolution, achieving state-of-the-art results in various environments.
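As an illustrative sketch only (not the code of [PLAN6], which uses a VAE, an RNN world model, and CMA-ES): a frozen random "encoder" compresses observations into a small latent vector, and a tiny linear policy on that latent is improved by a simple (1+lambda) evolution strategy on a toy task. All dimensions and the task itself are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, latent_dim, act_dim = 16, 4, 2
W_enc = rng.normal(size=(latent_dim, obs_dim)) / np.sqrt(obs_dim)  # frozen "encoder"

def encode(obs):
    return np.tanh(W_enc @ obs)             # compressed representation

def episode_return(policy_params, steps=20):
    """Toy task: keep the latent code of a drifting observation near zero."""
    W = policy_params.reshape(act_dim, latent_dim)
    obs, total = rng.normal(size=obs_dim), 0.0
    for _ in range(steps):
        action = np.tanh(W @ encode(obs))
        obs = 0.9 * obs + 0.1 * np.concatenate([action] * (obs_dim // act_dim))
        total -= np.sum(encode(obs) ** 2)   # reward: small latent norm
    return total

# Simple (1+lambda) evolution strategy on the compact policy parameters.
theta = np.zeros(act_dim * latent_dim)
for generation in range(50):
    candidates = [theta + 0.1 * rng.normal(size=theta.shape) for _ in range(8)]
    scores = [episode_return(c) for c in candidates + [theta]]
    theta = (candidates + [theta])[int(np.argmax(scores))]
```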

Finally, what does all of this have to do with the seemingly elusive concepts of consciousness and self-awareness? My first deep learning machine of 1991 [UN0-UN3] emulates aspects of consciousness as follows. It uses unsupervised learning and predictive coding [UN0-UN3] [SNT] to compress observation sequences. A so-called "conscious chunker RNN" attends to unexpected events that surprise a lower-level so-called "subconscious automatiser RNN." The chunker RNN learns to "understand" the surprising events by predicting them. The automatiser RNN uses a neural knowledge distillation procedure of 1991 [UN0-UN2] (see Sec. 2 of [MIR]) to compress and absorb the formerly "conscious" insights and behaviours of the chunker RNN, thus making them "subconscious."
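A hedged, heavily simplified sketch of the distillation step only: a larger "chunker" (teacher) is compressed into a smaller "automatiser" (student) by training the student to imitate the teacher's predictions. The 1991 system used RNNs operating on sequences together with predictive coding; the feedforward nets, sizes, and random data below are placeholder assumptions.

```python
import torch
import torch.nn as nn

in_dim, out_dim = 8, 8
chunker = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))      # teacher
automatiser = nn.Sequential(nn.Linear(in_dim, 16), nn.Tanh(), nn.Linear(16, out_dim))  # smaller student
opt = torch.optim.Adam(automatiser.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(32, in_dim)              # stand-in for observed sequence data
    with torch.no_grad():
        target = chunker(x)                  # the teacher's ("conscious") predictions
    loss = ((automatiser(x) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```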

Self-referential problem-solving robot thinking about itself

Let us now look at the predictive world model of a controller interacting with an environment, as discussed above. It also learns to efficiently encode the growing history of actions and observations through predictive coding [UN0-UN3] [SNT]. It automatically creates feature hierarchies, with lower-level neurons corresponding to simple feature detectors (perhaps similar to those found in mammalian brains) and higher-layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor, the world model will learn to identify regularities shared by existing internal data structures, and generate prototype encodings (across neuron populations) or compact representations or "symbols" (not necessarily discrete) for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. In particular, compact self-representations or self-symbols are natural by-products of the data compression process, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal sub-network of connected neurons computing neural activation patterns representing itself [CATCH] [AC10]. Whenever this representation becomes activated through the controller's planning mechanism of 1990 [AC90] [PLAN2], or through the more flexible controller queries of 2015 [PLAN4], the agent is thinking about itself, being aware of itself and its alternative possible futures, trying to create a future of minimal pain and maximal pleasure through interaction with its environment. That's why I keep claiming that we have had simple, conscious, self-aware, emotional, artificial agents for three decades [CON16].


Acknowledgments

Thanks to several expert reviewers for useful comments. Since science is about self-correction, let me know under juergen@idsia.ch if you can spot any remaining error. The contents of this article may be used for educational and non-commercial purposes, including articles for Wikipedia and similar sites. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


References

[ATT0] J. Schmidhuber and R. Huber. Learning to generate focus trajectories for attentive vision. Technical Report FKI-128-90, Institut für Informatik, Technische Universität München, 1990. PDF.

[ATT1] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(1 & 2):135-141, 1991. Based on TR FKI-128-90, TUM, 1990. PDF. More.

[ATT2] J.  Schmidhuber. Learning algorithms for networks with internal and external feedback. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton, editors, Proc. of the 1990 Connectionist Models Summer School, pages 52-61. San Mateo, CA: Morgan Kaufmann, 1990. PS. (PDF.)

[S59] A. L. Samuel. Some studies in machine learning using the game of checkers. IBM Journal on Research and Development, 3:210-229, 1959.

[H90] W. D. Hillis. Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D: Nonlinear Phenomena, 42(1-3):228-234, 1990.

[WER87] P. J. Werbos. Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17, 1987.

[WER89] P. J. Werbos. Backpropagation and neurocontrol: A review and prospectus. In IEEE/INNS International Joint Conference on Neural Networks, Washington, D.C., volume 1, pages 209-216, 1989.

[MUN87] P. W. Munro. A dual back-propagation scheme for scalar reinforcement learning. Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Seattle, WA, pages 165-176, 1987.

[NGU89] D. Nguyen and B. Widrow. The truck backer-upper: An example of self learning in neural networks. In IEEE/INNS International Joint Conference on Neural Networks, Washington, D.C., volume 1, pages 357-364, 1989.

[JOR90] M. I. Jordan and D. E. Rumelhart. Supervised learning with a distal teacher. Technical Report, Massachusetts Institute of Technology, 1990.

[GAN0] O. Niemitalo. A method for training artificial neural networks to generate missing data within a variable context. Blog post, Internet Archive, 2010.

[GAN1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio. Generative adversarial nets. NIPS 2014, 2672-2680, Dec 2014.

[PHD] J.  Schmidhuber. Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem (Dynamic neural nets and the fundamental spatio-temporal credit assignment problem). Dissertation, Institut für Informatik, Technische Universität München, 1990. PDF. HTML.

[PLAN2] J.  Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In Proc. IEEE/INNS International Joint Conference on Neural Networks, San Diego, volume 2, pages 253-258, June 17-21, 1990. Based on [AC90].

[PLAN3] J.  Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, NIPS'3, pages 500-506. San Mateo, CA: Morgan Kaufmann, 1991. PDF. Partially based on [AC90].

[PLAN4] J. Schmidhuber. On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. Preprint arXiv:1511.09249 [cs.AI], 2015.

[PLAN5] J. Schmidhuber. One Big Net For Everything. Preprint arXiv:1802.08864 [cs.AI], Feb 2018.

[PLAN6] D. Ha, J. Schmidhuber. Recurrent World Models Facilitate Policy Evolution. Advances in Neural Information Processing Systems (NIPS), Montreal, 2018. (Talk.) Preprint: arXiv:1809.01999. Github: World Models.

[OBJ1] K. Greff, A. Rasmus, M. Berglund, T. Hao, H. Valpola, J. Schmidhuber (2016). Tagger: Deep unsupervised perceptual grouping. NIPS 2016, pp. 4484-4492.

[OBJ2] K. Greff, S. van Steenkiste, J. Schmidhuber (2017). Neural expectation maximization. NIPS 2017, pp. 6691-6701.

[OBJ3] S. van Steenkiste, M. Chang, K. Greff, J. Schmidhuber (2018). Relational neural expectation maximization: Unsupervised discovery of objects and their interactions. ICLR 2018.

[OBJ4] A. Stanic, S. van Steenkiste, J. Schmidhuber (2021). Hierarchical Relational Inference. AAAI 2021.

[OBJ5] A. Gopalakrishnan, S. van Steenkiste, J. Schmidhuber (2020). Unsupervised Object Keypoint Learning using Local Spatial Predictability. Preprint arXiv/2011.12930.

[AC90] J.  Schmidhuber. Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. Technical Report FKI-126-90, TUM, Feb 1990, revised Nov 1990. PDF

[AC90b] J.  Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In J. A. Meyer and S. W. Wilson, editors, Proc. of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pages 222-227. MIT Press/Bradford Books, 1991. PDF. HTML.

[AC91] J. Schmidhuber. Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91, Inst. f. Informatik, Tech. Univ. Munich, April 1991. PDF.

[AC91b] J.  Schmidhuber. Curious model-building control systems. In Proc. International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458-1463. IEEE, 1991. PDF.

[AC95] J. Storck, S. Hochreiter, and J. Schmidhuber. Reinforcement-driven information acquisition in non-deterministic environments. In Proc. ICANN'95, vol. 2, pages 159-164. EC2 & CIE, Paris, 1995. PDF.

[AC97] J. Schmidhuber. What's interesting? Technical Report IDSIA-35-97, IDSIA, July 1997.

[AC99] J. Schmidhuber. Artificial Curiosity Based on Discovering Novel Algorithmic Predictability Through Coevolution. In P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, Z. Zalzala, eds., Congress on Evolutionary Computation, p. 1612-1618, IEEE Press, Piscataway, NJ, 1999.

[AC02] J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002. PDF.

[AC06] J.  Schmidhuber. Developmental Robotics, Optimal Artificial Curiosity, Creativity, Music, and the Fine Arts. Connection Science, 18(2): 173-187, 2006. PDF.

[AC09] J. Schmidhuber. Art & science as by-products of the search for novel patterns, or data compressible in unknown yet learnable ways. In M. Botta (ed.), Et al. Edizioni, 2009, pp. 98-112. PDF. (More on artificial scientists and artists.)

[AC10] J. Schmidhuber. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010. IEEE link. PDF.

[AC20] J. Schmidhuber. Generative Adversarial Networks are Special Cases of Artificial Curiosity (1990) and also Closely Related to Predictability Minimization (1991). Neural Networks, Volume 127, p 58-66, 2020. Preprint arXiv/1906.04493.

[R2] Reddit/ML, 2019. J. Schmidhuber really had GANs in 1990.

[PP] J. Schmidhuber. POWERPLAY: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Frontiers in Cognitive Science, 2013. ArXiv preprint (2011): arXiv:1112.5309 [cs.AI]

[PP1] R. K. Srivastava, B. Steunebrink, J. Schmidhuber. First Experiments with PowerPlay. Neural Networks, 2013. ArXiv preprint (2012): arXiv:1210.8385 [cs.AI].

[PP2] V. Kompella, M. Stollenga, M. Luciw, J. Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.

[RES5] Gloye, A., Wiesel, F., Tenchio, O., Simon, M. Reinforcing the Driving Quality of Soccer Playing Robots by Anticipation, IT - Information Technology, vol. 47, nr. 5, Oldenbourg Wissenschaftsverlag, 2005. PDF.

[RES7] J. Schmidhuber: Prototype resilient, self-modeling robots. Correspondence, Science, 316, no. 5825 p 688, May 2007.

[PM1] J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992. PDF. More.

[PM2] J. Schmidhuber, M. Eldracher, B. Foltin. Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4):773-786, 1996. PDF. More.

[TD] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning. 3 (1): 9-44, 1988.

[DYNA90] R. S. Sutton (1990). Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming. In Proceedings of the Seventh International Conference on Machine Learning, Austin, Texas, June 21-23, 1990, pages 216-224.

[DYNA91] R. S. Sutton (1991). Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart Bulletin 2.4 (1991):160-163.

[GVF] R. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, D. Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In Proc. 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 2, pp. 761-768, 2011.

[PG] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8.3-4: 229-256, 1992.

[DPG] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, M. Riedmiller. Deterministic policy gradient algorithms. Proceedings of ICML'31, Beijing, China, 2014. JMLR: W&CP volume 32.

[DDPG] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra. Continuous control with deep reinforcement learning. Preprint arXiv:1509.02971, 2015.

[NAN1] J.  Schmidhuber. Networks adjusting networks. In J. Kindermann and A. Linden, editors, Proceedings of `Distributed Adaptive Neural Information Processing', St.Augustin, 24.-25.5. 1989, pages 197-208. Oldenbourg, 1990. Extended version: TR FKI-125-90 (revised), Institut für Informatik, TUM. PDF.

[NAN2] J.  Schmidhuber. Networks adjusting networks. Technical Report FKI-125-90, Institut für Informatik, Technische Universität München. Revised in November 1990. PDF.

[NAN3] J. Schmidhuber. Recurrent networks adjusted by adaptive critics. In Proc. IEEE/INNS International Joint Conference on Neural Networks, Washington, D. C., volume 1, pages 719-722, 1990.

[NAN4] J. Schmidhuber. Additional remarks on G. Lukes' review of Schmidhuber's paper `Recurrent networks adjusted by adaptive critics'. Neural Network Reviews, 4(1):43, 1990.

[NAN5] M. Jaderberg, W. M. Czarnecki, S. Osindero, O. Vinyals, A. Graves, D. Silver, K. Kavukcuoglu. Decoupled Neural Interfaces using Synthetic Gradients. Preprint arXiv:1608.05343, 2016.

[UN0] J.  Schmidhuber. Neural sequence chunkers. Technical Report FKI-148-91, Institut für Informatik, Technische Universität München, April 1991. PDF.

[UN1] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992. Based on TR FKI-148-91, TUM, 1991 [UN0]. PDF. [First working Deep Learner based on a deep RNN hierarchy (with different self-organising time scales), overcoming the vanishing gradient problem through unsupervised pre-training and predictive coding. Also: compressing or distilling a teacher net (the chunker) into a student net (the automatizer) that does not forget its old skills—such approaches are now widely used. More.]

[UN2] J. Schmidhuber. Habilitation thesis, TUM, 1993. PDF. [An ancient experiment on "Very Deep Learning" with credit assignment across 1200 time steps or virtual layers and unsupervised pre-training for a stack of recurrent NN can be found here (depth > 1000).]

[UN3] J.  Schmidhuber, M. C. Mozer, and D. Prelinger. Continuous history compression. In H. Hüning, S. Neuhauser, M. Raus, and W. Ritschel, editors, Proc. of Intl. Workshop on Neural Networks, RWTH Aachen, pages 87-95. Augustinus, 1993.

[SNT] J. Schmidhuber, S. Heil (1996). Sequential neural text compression. IEEE Trans. Neural Networks, 1996. PDF. (An earlier version appeared at NIPS 1995.)

[CATCH] J. Schmidhuber. Philosophers & Futurists, Catch Up! Response to The Singularity. Journal of Consciousness Studies, Volume 19, Numbers 1-2, pp. 173-182(10), 2012. PDF.

[CON16] J. Carmichael (2016). Artificial Intelligence Gained Consciousness in 1991. Why A.I. pioneer Jürgen Schmidhuber is convinced the ultimate breakthrough already happened. Inverse, 2016. Link.

[DL1] J. Schmidhuber, 2015. Deep Learning in neural networks: An overview. Neural Networks, 61, 85-117. More.

[DL2] J. Schmidhuber, 2015. Deep Learning. Scholarpedia, 10(11):32832.

[DL4] J. Schmidhuber, 2017. Our impact on the world's most valuable public companies: 1. Apple, 2. Alphabet (Google), 3. Microsoft, 4. Facebook, 5. Amazon ... HTML.

[T20] J. Schmidhuber (June 2020). Critique of 2018 Turing Award.

[MIR] J. Schmidhuber (10/4/2019). Deep Learning: Our Miraculous Year 1990-1991. See also arxiv:2005.05744 (May 2020).

[DEC] J. Schmidhuber (02/20/2020). The 2010s: Our Decade of Deep Learning / Outlook on the 2020s.

[ATT] J. Schmidhuber (2020). End-to-End Differentiable Sequential Neural Attention 1990-93.

[META] J. Schmidhuber, 2020. Survey: Metalearning Machines Learn to Learn (1987-).

