Speakers

  • Paul Werbos
  • Paul J. Werbos is an IEEE Fellow and an early recipient of the IEEE Neural Networks Pioneer Award, recognizing his original development of backpropagation and of adaptive dynamic programming in the 1960s and 1970s. INNS gave him its highest award, the Hebb Award, for showing how these mathematical tools can explain key aspects of learning in biological brains. His Ph.D. is from Harvard (1974), and his Master's degrees are from the London School of Economics (1968) and Harvard (1969). In high school, he took undergraduate and graduate courses in mathematics at the University of Pennsylvania and Princeton, and obtained an FCC First Class Commercial Radiotelephone license. He has published on issues of consciousness, the foundations of physics, and human potential. From 1988 to 2015, he led the neural network research funded by the Engineering Directorate of NSF, among other research areas. As part of his IEEE-USA activity, he gave a major talk in the Rayburn House Office Building to over 200 Congressional staffers, which helped prepare the State of the Union message that led to the Energy Independence and Security Act of 2007. In 2009, he served as a Brookings Fellow for Senator Specter, responsible for climate, energy, and space policy.
  • Li Deng
  • Li Deng received a Ph.D. from the University of Wisconsin-Madison. He was an assistant professor and then a tenured full professor at the University of Waterloo, Ontario, Canada, from 1989 to 1999. He then joined Microsoft Research, Redmond, USA, where he currently directs the R&D of its Deep Learning Technology Center, founded in early 2014, and also serves as Chief Scientist of AI. Dr. Deng is a Fellow of the IEEE, the Acoustical Society of America, and ISCA. He served on the Board of Governors of the IEEE Signal Processing Society. More recently, he was the Editor-in-Chief of the IEEE Signal Processing Magazine and of the IEEE/ACM Transactions on Audio, Speech, and Language Processing. Dr. Deng's technical work on industry-scale deep learning and AI has been recognized by several awards, including the 2013 IEEE Signal Processing Society Best Paper Award and the 2015 IEEE SPS Technical Achievement Award "for outstanding contributions to deep learning and to automatic speech recognition".
  • Risto Miikkulainen
  • Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and a Fellow at Sentient Technologies, Inc. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods and applications of evolutionary optimization of neural networks, as well as neural network models of natural language processing and self-organization of the visual cortex; he is an author of over 370 articles in these research areas. He is an IEEE Fellow, a member of the Board of Governors of the Neural Network Society, and an action editor of Cognitive Systems Research and IEEE Transactions on Computational Intelligence and AI in Games.
  • Jason Weston
  • Jason Weston has been a research scientist at Facebook, NY, since February 2014. He earned his PhD in machine learning at Royal Holloway, University of London, and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk, and Vladimir Vapnik) in 2000. From 2000 to 2002, he was a researcher at Biowulf Technologies, New York. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tübingen, Germany. From 2003 to 2009 he was a research staff member at NEC Labs America, Princeton. From 2009 to 2014 he was a research scientist at Google, NY. His interests lie in statistical machine learning and its application to text, audio, and images. Jason has published over 100 papers, including best paper awards at ICML and ECML. He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed in Science as one of the top 50 authors in Computer Science.
  • Oriol Vinyals
  • Oriol Vinyals is a Staff Research Scientist at Google DeepMind, working in deep learning. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator award. At DeepMind he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, language, and vision.
  • Mike Mozer
  • Michael Mozer received a Ph.D. in Cognitive Science from the University of California at San Diego in 1987. Following a postdoctoral fellowship with Geoffrey Hinton at the University of Toronto, he joined the faculty at the University of Colorado at Boulder and is presently a Professor in the Department of Computer Science and the Institute of Cognitive Science. He is secretary of the Neural Information Processing Systems Foundation and has served as chair of the Cognitive Science Society. He is interested both in developing machine learning algorithms that leverage insights from human cognition and in developing software tools to optimize human performance using machine learning methods.
  • Ilya Sutskever
  • Ilya Sutskever received his Ph.D. in computer science from the University of Toronto, under the supervision of Geoffrey Hinton. He was briefly a postdoctoral fellow with Andrew Ng at Stanford University, after which he left to co-found DNNResearch, which Google acquired the following year. Sutskever joined the Google Brain team as a research scientist, where he developed the Sequence to Sequence model, contributed to the design of TensorFlow, and helped establish the Brain Residency Program. He is a co-founder of OpenAI, where he currently serves as research director. Sutskever has made many contributions to the field of deep learning, including the convolutional neural network that convinced the computer vision community of the power of deep learning by winning the 2012 ImageNet competition. He was listed in MIT Technology Review's 35 Innovators Under 35.
  • Marcus Hutter
  • Marcus Hutter is a Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra. Before that, he was with IDSIA in Switzerland and NICTA. He has made pioneering contributions to Universal Artificial Intelligence, a mathematical top-down approach to AI based on Kolmogorov complexity, algorithmic probability, universal Solomonoff induction, Occam's razor, Levin search, sequential decision theory, dynamic programming, reinforcement learning, and rational agents. He is generally attracted to fundamental problems on the boundary between science and philosophy that have a chance of being solved within his expected lifespan. One is Artificial Intelligence; the other is particle physics, with questions related to physical Theories of Everything.
  • Nando de Freitas
  • Nando de Freitas is a machine learning professor at Oxford University, a lead research scientist at Google DeepMind, and a Fellow of the Canadian Institute For Advanced Research (CIFAR). He received his PhD from Trinity College, Cambridge University, in 2000, for work on Bayesian methods for neural networks. From 1999 to 2001, he was a postdoc at UC Berkeley in the AI group of Stuart Russell. He was a professor at the University of British Columbia from 2001 to 2014. He has spun off a few companies, most recently Dark Blue Labs, which was acquired by Google. Among his recent awards are best paper awards at IJCAI 2013, ICLR 2016, and ICML 2016, and the Yelp Dataset award for a multi-instance transfer learning paper at KDD 2015. He also received the 2012 Charles A. McDowell Award for Excellence in Research and the 2010 Mathematics of Information Technology and Complex Systems Young Researcher Award.
  • Alex Graves
  • Alex Graves works at Google DeepMind. His main contributions to neural networks include the Connectionist Temporal Classification training algorithm (widely used for speech, handwriting, and gesture recognition, e.g. by Google voice search), a form of differentiable attention for RNNs (originally developed for handwriting generation, now a standard tool in computer vision, machine translation, and elsewhere), stochastic gradient variational inference, and Neural Turing Machines.
  • Nal Kalchbrenner
  • Nal Kalchbrenner is a Senior Scientist at Google DeepMind. He studied deep learning at Oxford University, where he worked on one of the first sequence-to-sequence neural networks for machine translation. He received an ICML Best Paper award for his work on the PixelRNN model of images and extended this work to other modalities, such as sound, with the WaveNet model. He was also part of the team that created AlphaGo.