
Jürgen Schmidhuber (October 2020)
Pronounce: You_again Shmidhoobuh
AI Blog
Twitter: @SchmidhuberAI


15-year anniversary: 1st paper with "learn deep" in the title (2005)

In 2020, we celebrate the 15-year anniversary of the first machine learning paper with the word combination "learn deep" in the title (2005) [DL6]. It showed how deep reinforcement learning (RL) without a teacher can solve problems of depth 1000 [DL1] and more. Soon after its publication, everybody started talking about "deep learning." Causality or correlation? Anyhow, it must be mentioned that the ancient term "deep learning" was introduced to the field of Machine Learning much earlier [DL2] by Rina Dechter in 1986 [Dec86], and to Artificial Neural Networks (NNs) by Aizenberg et al. in 2000 [Aiz00]. That is, in 2020, we are also celebrating the 20-year anniversary of the latter! More on this in Sec. X of [T20].

The work of 2005 [DL6] was driven by my former senior researcher Faustino Gomez, now CEO of NNAISENSE. It was about deep RL with recurrent neural networks and neuroevolution. An algorithm called Hierarchical Enforced SubPopulations was used to simultaneously evolve NNs at two levels of granularity: full networks and network components (neurons). In partially observable environments, the method was applied to tasks involving temporal dependencies spanning thousands of time steps. It outperformed the best conventional RL systems.
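The neuron-level idea can be illustrated with a minimal, stdlib-only sketch: keep one subpopulation of candidate neurons per hidden unit, assemble networks by sampling one neuron from each subpopulation, and credit each participating neuron with the network's fitness. This toy version evolves small feedforward networks on XOR rather than the recurrent POMDP controllers of [DL6], omits the hierarchical (full-network) level, and all parameter values and helper names are illustrative assumptions, not the original implementation:

```python
import math, random

random.seed(0)

N_INPUTS, N_HIDDEN = 2, 3            # one subpopulation per hidden neuron
POP_PER_SUB, TRIALS, GENS = 20, 10, 30

# Toy task: XOR (the 2005 paper used much deeper POMDP tasks instead)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def new_neuron():
    # genome: input weights, bias, output weight
    return [random.uniform(-1, 1) for _ in range(N_INPUTS + 2)]

subpops = [[new_neuron() for _ in range(POP_PER_SUB)] for _ in range(N_HIDDEN)]

def forward(neurons, x):
    # one tanh hidden layer assembled from the sampled neurons, sigmoid output
    out = 0.0
    for g in neurons:
        h = math.tanh(sum(w * xi for w, xi in zip(g, x)) + g[N_INPUTS])
        out += g[N_INPUTS + 1] * h
    return 1.0 / (1.0 + math.exp(-out))

def fitness(neurons):
    # negative squared error over the task (higher is better)
    return -sum((forward(neurons, x) - y) ** 2 for x, y in XOR)

best_f, best_net = -1e9, None
for gen in range(GENS):
    scores = [[[] for _ in range(POP_PER_SUB)] for _ in range(N_HIDDEN)]
    for _ in range(TRIALS * POP_PER_SUB):
        # sample one neuron per subpopulation, evaluate the resulting network
        picks = [random.randrange(POP_PER_SUB) for _ in range(N_HIDDEN)]
        f = fitness([subpops[i][p] for i, p in enumerate(picks)])
        for i, p in enumerate(picks):
            scores[i][p].append(f)          # credit every participant
        if f > best_f:
            best_f = f
            best_net = [list(subpops[i][p]) for i, p in enumerate(picks)]
    # evolve each subpopulation independently: keep top half, mutate to refill
    for i in range(N_HIDDEN):
        avg = [sum(s) / len(s) if s else -1e9 for s in scores[i]]
        order = sorted(range(POP_PER_SUB), key=lambda j: -avg[j])
        elite = [subpops[i][j] for j in order[:POP_PER_SUB // 2]]
        subpops[i] = elite + [
            [w + random.gauss(0, 0.3) for w in random.choice(elite)]
            for _ in range(POP_PER_SUB - len(elite))
        ]
```

Because each neuron is scored by the average fitness of the networks it took part in, subpopulations specialize into complementary roles; Hierarchical Enforced SubPopulations adds a second level that evolves complete networks alongside these neuron subpopulations.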

We had many additional papers on these topics. See, e.g., the overview pages on reinforcement learning (since 1989), artificial evolution (since 1987), co-evolving recurrent neurons (since 2005), compressed network search (since 2013), Evolino (since 2005), and genetic programming (since 1987). Even more papers on this can be found on my publication page. See also Sec. 5 of [DEC] and Sec. 8 of [MIR].


Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


References

[DL6] F. Gomez and J. Schmidhuber. Co-evolving recurrent neurons learn deep memory POMDPs. In Proc. GECCO'05, Washington, D. C., pp. 1795-1802, ACM Press, New York, NY, USA, 2005. PDF.

[DL1] J. Schmidhuber, 2015. Deep Learning in neural networks: An overview. Neural Networks, 61, 85-117. More.

[DL2] J. Schmidhuber, 2015. Deep Learning. Scholarpedia, 10(11):32832.

[Dec86] R. Dechter (1986). Learning while searching in constraint-satisfaction problems. University of California, Computer Science Department, Cognitive Systems Laboratory. [First paper to introduce the term "Deep Learning" to Machine Learning.]

[Aiz00] I. Aizenberg, N. N. Aizenberg, and J. P. L. Vandewalle (2000). Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer Science & Business Media. [First work to introduce the term "Deep Learning" to Neural Networks.]

[T20] J. Schmidhuber (June 2020). Critique of 2018 Turing Award. Link.

[MIR] J. Schmidhuber (10/4/2019). Deep Learning: Our Miraculous Year 1990-1991. See also arxiv:2005.05744 (May 2020).

[DEC] J. Schmidhuber (02/20/2020). The 2010s: Our Decade of Deep Learning / Outlook on the 2020s.
