Statistical inference: Learning in artificial neural networks

Howard Hua Yang*, Noboru Murata, Shun Ichi Amari

*Corresponding author for this work

Research output: Contribution to journal · Review article · peer-review

8 Citations (Scopus)


Artificial neural networks (ANNs) are widely used to model low-level neural activities and high-level cognitive functions. In this article, we review the applications of statistical inference to learning in ANNs. Statistical inference provides an objective way to derive learning algorithms, both for training ANNs and for evaluating the performance of trained ANNs. Solutions to the over-fitting problem by model-selection methods, based either on conventional statistical approaches or on a Bayesian approach, are discussed. The use of supervised and unsupervised learning algorithms for ANNs is reviewed. Training a multilayer ANN by supervised learning is equivalent to nonlinear regression. The ensemble methods described here, bagging and arcing, can be applied to combine ANNs into a new predictor with improved performance. Unsupervised learning algorithms, derived either from the Hebbian law for bottom-up self-organization or from global objective functions for top-down self-organization, are also discussed.
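The abstract mentions Hebbian-law-based bottom-up self-organization. As an illustration only (not taken from the article itself), the sketch below implements Oja's rule, a stabilized Hebbian update in which a single linear unit's weight vector converges to the leading principal direction of the input data; all names and parameter values here are assumptions for the toy example.

```python
import numpy as np

def oja_learn(X, lr=0.01, epochs=50, seed=0):
    """Stabilized Hebbian learning (Oja's rule) on data matrix X.

    Each update is w += lr * y * (x - y * w), where y = w . x.
    The decay term -lr * y^2 * w keeps ||w|| bounded, so w converges
    (up to sign) to the first principal direction of X.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                    # unit output (Hebbian activity)
            w += lr * y * (x - y * w)    # Hebbian growth + Oja decay
    return w / np.linalg.norm(w)

# Toy data: 2-D Gaussian cloud with most variance along (1, 1)/sqrt(2).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])   # anisotropic cloud
R = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)   # 45-degree rotation
X = X @ R

w = oja_learn(X)
# |cos angle| between w and the dominant direction (1, 1)/sqrt(2):
print(abs(w @ np.array([1.0, 1.0]) / np.sqrt(2)))
```

This is the bottom-up flavor of self-organization: no global objective is optimized explicitly, yet the local Hebbian rule recovers the same direction that top-down PCA (maximizing projected variance) would find.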

Original language: English
Pages (from-to): 4-10
Number of pages: 7
Journal: Trends in Cognitive Sciences
Issue number: 1
Publication status: Published - 1998 Jan 1
Externally published: Yes

ASJC Scopus subject areas

  • Neuropsychology and Physiological Psychology
  • Experimental and Cognitive Psychology
  • Cognitive Neuroscience


