Abstract
The expectation-maximization (EM) algorithm is generalized so that learning proceeds according to adjustable weights defined in terms of probability measures. The presented method, the weighted EM algorithm, or α-EM algorithm, includes the existing EM algorithm as a special case. It is further found that this learning structure can operate systolically, and that monitors can be added which interact with lower systolic subsystems; this is made possible by attaching building blocks of the weighted (or plain) EM learning. The derivation of the whole algorithm is based on generalized divergences. In addition to the discussion of learning, extensions of basic statistical quantities such as Fisher's efficient score, the Fisher information measure, and the Cramér-Rao inequality are given; these appear in the update equations of the generalized expectation learning. Experiments show that the presented generalized version contains cases that outperform traditional learning methods.
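To make the relationship to ordinary EM concrete, the sketch below is a minimal Python illustration, not the paper's implementation. It shows (i) an α-logarithm of the kind used to build generalized divergences, which recovers the natural logarithm in the limit corresponding to plain EM, and (ii) the ordinary EM iteration for a one-dimensional Gaussian mixture, the special case the weighted EM algorithm is stated to contain. The function names, the particular α-logarithm form, and the Gaussian-mixture setting are assumptions for illustration; the paper's exact definitions may differ.

```python
import numpy as np

def alpha_log(x, alpha):
    """One common form of the alpha-logarithm:
    L_alpha(x) = (2 / (1 + alpha)) * (x**((1 + alpha) / 2) - 1).
    As alpha -> -1 this tends to log(x), the plain-EM special case.
    (Assumed form for illustration; not taken from the paper.)"""
    if np.isclose(alpha, -1.0):
        return np.log(x)
    return (2.0 / (1.0 + alpha)) * (x ** ((1.0 + alpha) / 2.0) - 1.0)

def em_gmm_1d(x, k=2, iters=100, seed=0):
    """Plain EM for a 1-D Gaussian mixture -- the special case that
    the weighted (alpha-) EM algorithm reduces to."""
    rng = np.random.default_rng(seed)
    n = x.size
    pi = np.full(k, 1.0 / k)              # mixing weights
    mu = rng.choice(x, k, replace=False)  # component means, seeded from data
    var = np.full(k, x.var())             # component variances
    for _ in range(iters):
        # E-step: posterior responsibilities of each component for each point
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the weighted data
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

Replacing the logarithm of the likelihood ratio with `alpha_log` for α ≠ -1 gives the kind of adjustable weighting the abstract refers to; at α = -1 the sketch reduces exactly to classical EM.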
Original language | English
---|---
Title of host publication | IEEE International Conference on Neural Networks - Conference Proceedings
Place of Publication | Piscataway, NJ, United States
Publisher | IEEE
Pages | 1936-1941
Number of pages | 6
Volume | 3
Publication status | Published - 1997
Event | Proceedings of the 1997 IEEE International Conference on Neural Networks. Part 4 (of 4) - Houston, TX, USA. Duration: 1997 Jun 9 → 1997 Jun 12
Other

Other | Proceedings of the 1997 IEEE International Conference on Neural Networks. Part 4 (of 4)
---|---
City | Houston, TX, USA
Period | 97/6/9 → 97/6/12
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Artificial Intelligence