Abstract
The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations using a CM5. Small stochastic multilayer feedforward networks trained with backpropagation are investigated. For a large number of training patterns t, the asymptotic generalization error scales as 1/t, as predicted. For a medium range of t, a faster 1/t² scaling is observed. This effect is explained using higher-order corrections of the likelihood expansion. For small t, it is shown that the scaling law changes drastically when the network undergoes a transition from strong overfitting to effective learning.
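Since the abstract contrasts a 1/t regime with a faster 1/t² regime, a short numerical sketch may help show how the two appear as effective exponents on a log-log plot. This is not the authors' code; the error curve below and its coefficients a and b are illustrative assumptions chosen only to make the crossover visible.

```python
import numpy as np

# A minimal sketch, not the paper's code: the abstract describes a
# generalization error that falls off as 1/t for large t, with a faster
# 1/t^2 regime at medium t explained by higher-order likelihood
# corrections. A curve containing both terms,
#     eps(t) = a/t + b/t^2,
# reproduces that crossover; a and b are made-up coefficients.
a, b = 1.0, 200.0

t = np.logspace(1, 5, 41)          # number of training patterns
eps = a / t + b / t**2             # synthetic generalization error

# Effective scaling exponent = local slope d(log eps)/d(log t).
# It sits near -2 where the 1/t^2 term dominates (medium t) and
# approaches -1 asymptotically (large t), as in the abstract.
exponent = np.gradient(np.log(eps), np.log(t))

for ti, ei in zip(t[::8], exponent[::8]):
    print(f"t = {ti:10.0f}   effective exponent ~ {ei:+.2f}")
```

Running the sketch prints exponents near -2 for small-to-medium t that drift toward -1 as t grows past the crossover point t ~ b/a, mirroring the two scaling regimes the paper reports.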
Original language | English
---|---
Pages (from-to) | 1085-1106
Number of pages | 22
Journal | Neural Computation
Volume | 8
Issue number | 5
Publication status | Published - Jul 1 1996
Externally published | Yes
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Cognitive Neuroscience