Abstract
Artificial neural networks (ANNs) are widely used to model low-level neural activity and high-level cognitive functions. In this article, we review applications of statistical inference to learning in ANNs. Statistical inference provides an objective way to derive learning algorithms, both for training and for evaluating the performance of trained ANNs. Solutions to the over-fitting problem through model-selection methods, based either on conventional statistical approaches or on a Bayesian approach, are discussed. The use of supervised and unsupervised learning algorithms for ANNs is reviewed. Training a multilayer ANN by supervised learning is equivalent to nonlinear regression. The ensemble methods described here, bagging and arcing, can be applied to combine ANNs into a new predictor with improved performance. Unsupervised learning algorithms derived either from the Hebbian rule for bottom-up self-organization, or from global objective functions for top-down self-organization, are also discussed.
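The ensemble idea mentioned above can be illustrated with a minimal sketch of bagging (bootstrap aggregating) for regression. This is not the paper's own implementation: the tiny one-hidden-layer network, its training hyperparameters, and the toy sine-regression task are all illustrative assumptions; only the bagging procedure itself (train each member on a bootstrap resample, then average the predictions) follows the technique named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, hidden=8, lr=0.05, epochs=300):
    """Train a small one-hidden-layer regression network by gradient
    descent on mean squared error (an illustrative base learner)."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        pred = h @ w2 + b2                  # network output
        err = pred - y                      # residuals
        gh = np.outer(err, w2) * (1 - h**2) # backprop through tanh
        W1 -= lr * X.T @ gh / n
        b1 -= lr * gh.mean(axis=0)
        w2 -= lr * h.T @ err / n
        b2 -= lr * err.mean()
    return W1, b1, w2, b2

def predict(params, X):
    W1, b1, w2, b2 = params
    return np.tanh(X @ W1 + b1) @ w2 + b2

def bagged_ensemble(X, y, n_models=10):
    """Fit each base network on a bootstrap resample of the training set."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(train_mlp(X[idx], y[idx]))
    return models

def bagged_predict(models, X):
    """Combine the members by averaging their predictions (regression case)."""
    return np.mean([predict(m, X) for m in models], axis=0)

# Toy 1-D regression problem: a noisy sine curve.
X = np.linspace(-2, 2, 80).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.2, size=80)

models = bagged_ensemble(X, y)
yhat = bagged_predict(models, X)
```

Averaging over bootstrap-trained networks mainly reduces the variance component of the prediction error, which is why bagging tends to help unstable predictors such as ANNs; arcing differs in that later members are trained on adaptively reweighted resamples rather than uniform ones.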
Original language | English |
---|---|
Pages (from-to) | 4-10 |
Number of pages | 7 |
Journal | Trends in Cognitive Sciences |
Volume | 2 |
Issue number | 1 |
DOIs | |
Publication status | Published - 1998 Jan 1 |
Externally published | Yes |
ASJC Scopus subject areas
- Neuropsychology and Physiological Psychology
- Experimental and Cognitive Psychology
- Cognitive Neuroscience