Geoffrey Hinton

Geoffrey E. Hinton, CC FRS FRSC (born December 6, 1947, in Wimbledon, Great Britain) is a British computer scientist and cognitive psychologist who is best known for his contributions to the theory of artificial neural networks.

Life and education

Geoffrey Hinton is the son of the entomologist Howard Hinton (1912–1977) and a great-great-grandson of the logician George Boole. He grew up as an atheist while attending a Christian school. Driven by his ambition to understand the human mind, he studied experimental psychology at the University of Cambridge (England) from 1967 to 1970, switching in the meantime to physiology and philosophy out of dissatisfaction with the course content. Disappointed with these disciplines as well, he ultimately finished his studies with a degree in psychology. Only as a doctoral student were his studies of the then-unpopular neural networks tolerated by his supervisors; Hinton was firmly convinced that neural systems are both necessary and sufficient to explain and reproduce intelligence.

In 1978 he received his PhD in artificial intelligence from the University of Edinburgh (Scotland). After positions at the University of Sussex (England), the University of California, San Diego (USA), and Carnegie Mellon University (Pittsburgh, USA), he became a professor in the Computer Science Department of the University of Toronto (Canada) in 1987. From 1998 to 2001 the Gatsby Computational Neuroscience Unit was established under his leadership at University College London, after which he returned to his professorship at the University of Toronto. Since March 2013, Hinton has also been working at Google alongside his work at the University of Toronto.


Work

Geoffrey Hinton investigates the application of artificial neural networks in the areas of learning, memory, perception, and symbol processing. He was among the researchers who introduced the backpropagation algorithm (in a 1986 Nature paper with David Rumelhart and Ronald Williams) and developed, among other things, the concepts of the Boltzmann machine and the Helmholtz machine. Easily understandable introductions to his scientific work can be found in his articles in Scientific American from 1992 and 1993.
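The backpropagation algorithm mentioned above can be illustrated with a minimal sketch: a tiny two-layer sigmoid network trained on the XOR problem. The architecture (2-4-1), learning rate, and data here are illustrative choices for this sketch, not details taken from the 1986 paper.

```python
# Minimal illustration of backpropagation: forward pass, then error
# derivatives propagated backwards layer by layer, then gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for an illustrative 2-4-1 network with sigmoid units
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: chain rule through the sigmoid at each layer
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss = float(((out - y) ** 2).mean())
print(loss)  # mean squared error after training
```

Without the hidden layer, no setting of the weights can solve XOR; backpropagation is what makes the hidden units learn useful intermediate features.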

Honors and memberships

In 2001 he received the first Rumelhart Prize for contributions to the theoretical foundations of human cognition, and in 2005 the IJCAI Award for Research Excellence. He was elected to the Royal Society of Canada in 1996, to the Royal Society in 1998, and to the American Academy of Arts and Sciences in 2003. In 2016 he was elected to the National Academy of Engineering. For 2016 he received the BBVA Foundation Frontiers of Knowledge Award, and for 2018 the Turing Award.


Publications

  • How neural networks learn from experience. In: Scientific American, 9/1992
  • with D. C. Plaut and T. Shallice: Simulating brain damage. In: Scientific American, 10/1993
