Abstract
A class of extended logarithms is used to derive α-weighted EM (α-weighted Expectation and Maximization) algorithms. These extended EM algorithms (WEMs, or α-EMs) are expected to converge faster than the traditional (logarithmic) EM algorithm, which is a special case of the new WEM. In this paper, general theoretical discussions are given first. Then, clear evidence of faster convergence than the ordinary EM approach is presented for mixture-of-experts neural networks. This proceeds in three steps: concrete algorithms are presented first, their convergence is then verified theoretically, and finally experiments on mixture-of-experts learning demonstrate the superiority of the WEM. Besides this supervised learning task, an unsupervised case on a Gaussian mixture is also examined, where faster convergence of the WEM is observed again.
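The abstract treats the ordinary (logarithmic) EM algorithm as the special case of the WEM against which convergence speed is compared, and names a Gaussian mixture as the unsupervised test case. As a point of reference only, the following is a minimal sketch of that baseline: a standard EM fit of a K-component Gaussian mixture in Python/NumPy. The function name `em_gmm`, the initialization scheme, and the regularization constant are illustrative assumptions, and the α-weighting introduced by the paper is not specified in this abstract, so it is only indicated by a comment rather than implemented.

```python
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """Standard (logarithmic) EM for a K-component Gaussian mixture.

    This is the baseline that the abstract describes as a special case
    of the alpha-weighted EM (WEM); the alpha-weighting of the E-step
    is not given in the abstract and is therefore omitted here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialization (illustrative choice): uniform mixing weights,
    # means drawn from the data, shared sample covariance.
    pi = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, K, replace=False)]
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: posterior responsibilities r[i, k] = P(component k | x_i),
        # computed from per-component Gaussian log-densities.
        # (A WEM/alpha-EM variant would modify this weighting step.)
        log_r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(sigma[k])
            _, logdet = np.linalg.slogdet(sigma[k])
            quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
            log_r[:, k] = np.log(pi[k]) - 0.5 * (quad + logdet + d * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixing weights, means, and covariances
        # from the responsibilities.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma[k] = (r[:, k, None] * diff).T @ diff / nk[k] + 1e-6 * np.eye(d)

    return pi, mu, sigma
```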
| Original language | English |
|---|---|
| Title of host publication | IEEE International Conference on Neural Networks - Conference Proceedings |
| Editors | Anon |
| Place of Publication | Piscataway, NJ, United States |
| Publisher | IEEE |
| Pages | 2306-2311 |
| Number of pages | 6 |
| Volume | 3 |
| Publication status | Published - 1998 |
| Event | Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) - Anchorage, AK, USA. Duration: May 4, 1998 → May 9, 1998 |
Other
| Other | Proceedings of the 1998 IEEE International Joint Conference on Neural Networks. Part 1 (of 3) |
|---|---|
| City | Anchorage, AK, USA |
| Period | 98/5/4 → 98/5/9 |
ASJC Scopus subject areas
- Software