Asymptotic statistical theory of overtraining and cross-validation

Shun Ichi Amari*, Noboru Murata, Klaus Robert Müller, Michael Finke, Howard Hua Yang

*Corresponding author for this work

Research output: Article › peer-review

272 Citations (Scopus)

Abstract

A statistical theory of overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with the Kullback-Leibler divergence, in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping, we answer the question of the ratio in which the examples should be divided into training and cross-validation sets in order to obtain optimum performance. Although cross-validated early stopping is useless in the asymptotic region, it certainly decreases the generalization error in the nonasymptotic region. Our large-scale simulations performed on a CM5 are in good agreement with our analytical findings.
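To make the abstract's split-ratio question concrete, the following is a minimal Python sketch of cross-validated early stopping. It is an illustration, not the authors' code: the `model` interface (`fit_epoch`, `loss`, `get_state`, `set_state`) is hypothetical, and the closing rule of thumb r_opt ≈ 1/√(2m) is the asymptotic form commonly attributed to this paper, stated here as an assumption rather than taken from the abstract.

```python
import numpy as np

def split_for_early_stopping(X, y, r, seed=0):
    """Hold out a fraction r of the examples as a cross-validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = max(1, int(round(r * len(X))))
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return X[train_idx], y[train_idx], X[val_idx], y[val_idx]

def train_with_early_stopping(model, X_tr, y_tr, X_val, y_val,
                              max_epochs=1000, patience=10):
    """Generic early-stopping loop; `model` is a hypothetical interface
    exposing fit_epoch(), loss(), get_state(), and set_state()."""
    best_loss, best_state, wait = np.inf, None, 0
    for epoch in range(max_epochs):
        model.fit_epoch(X_tr, y_tr)          # one pass over the training set
        val_loss = model.loss(X_val, y_val)  # estimate of generalization error
        if val_loss < best_loss:
            best_loss, best_state, wait = val_loss, model.get_state(), 0
        else:
            wait += 1
            if wait >= patience:             # stop: validation loss no longer improves
                break
    model.set_state(best_state)              # restore the best parameters seen
    return model

# Assumed rule of thumb (not from the abstract): for a network with m
# modifiable parameters, the optimal validation fraction behaves
# asymptotically like r_opt ≈ 1/sqrt(2m).
m = 100                        # hypothetical number of modifiable parameters
r_opt = 1.0 / np.sqrt(2 * m)   # ≈ 0.07, i.e. hold out roughly 7% of the examples
```

Under these assumptions, the split function would be called with `r=r_opt`; the paper's point, as the abstract notes, is that this procedure helps only in the nonasymptotic regime.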

Original language: English
Pages (from-to): 985-996
Number of pages: 12
Journal: IEEE Transactions on Neural Networks
Volume: 8
Issue number: 5
DOI
Publication status: Published - 1997
Externally published: Yes

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

