Dropout (artificial neural network)
Dropout is a regularization method for artificial neural networks that reduces the risk of overfitting.
During training, a previously specified fraction of the neurons in each layer (around 30%) is randomly switched off ("dropped out") at each training step and is not taken into account in the following computation. This has proven to be a very effective way of training deep neural networks.
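The procedure described above can be sketched as a single forward-pass function. This is a minimal illustration, not any particular library's implementation; it uses the common "inverted dropout" convention, in which surviving activations are rescaled during training so that no adjustment is needed at inference time. The function name and parameters are chosen for illustration only.

```python
import numpy as np

def dropout_forward(x, drop_prob=0.3, training=True, rng=None):
    """Apply inverted dropout to a layer's activations x.

    During training, each unit is zeroed with probability drop_prob and
    the surviving units are scaled by 1 / (1 - drop_prob), so the
    expected activation stays the same and inference needs no rescaling.
    """
    if not training or drop_prob == 0.0:
        return x  # at inference time, the full network is used unchanged
    rng = np.random.default_rng() if rng is None else rng
    # keep each unit independently with probability 1 - drop_prob
    mask = rng.random(x.shape) >= drop_prob
    return x * mask / (1.0 - drop_prob)

# Example: activations of one layer during a training step
x = np.ones((4, 5))
y = dropout_forward(x, drop_prob=0.3, training=True,
                    rng=np.random.default_rng(0))
```

At inference (`training=False`) the input passes through unchanged, which is what makes the training-time rescaling convenient in practice.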
Literature
- Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov: Dropout: A Simple Way to Prevent Neural Networks from Overfitting. In: Journal of Machine Learning Research, vol. 15, no. 2. Department of Computer Science, University of Toronto, 2014, ISSN 1532-4435, OCLC 5973067678, pp. 1929–1958 (cs.toronto.edu [PDF; accessed November 17, 2016]).
- Nitish Srivastava: Improving Neural Networks with Dropout. Department of Computer Science, University of Toronto, 2013 (cs.utoronto.ca [PDF; accessed November 17, 2016]).
Web links
- Hinton's Dropout in 3 Lines of Python. iamtrask.github.io, July 28, 2015, accessed November 17, 2016.
- Data Science 101: Preventing Overfitting in Neural Networks. In: kdnuggets.com, April 2015, accessed November 17, 2016.
References
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Retrieved July 26, 2015.
- Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov: Improving neural networks by preventing co-adaptation of feature detectors. July 3, 2012, OCLC 798541308, arXiv:1207.0580.