Optimization transfer for computational learning: A hierarchy from f-ICA and alpha-EM to their offsprings

Yasuo Matsuyama*, Shuichiro Imahara, Naoto Katsumata

*Corresponding author of this work

    Research output: Conference contribution

    3 Citations (Scopus)

    Abstract

    Likelihood optimization methods for learning algorithms are generalized, and faster algorithms are provided. The idea is to transfer the optimization to a general class of convex divergences between two probability density functions. The first part explains why such optimization transfer is significant. The second part contains the derivation of the generalized ICA (Independent Component Analysis). Experiments on brain fMRI maps are reported. The third part discusses this optimization transfer in the generalized EM algorithm (Expectation-Maximization). Hierarchical descendants of this algorithm, such as vector quantization and self-organization, are also explained.
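    The abstract's central device is a parametric family of convex divergences that contains the log-likelihood criterion as a special case. As a minimal numerical sketch of such a family, the following computes an alpha-divergence between two discrete distributions, assuming Amari's parameterization; the paper's own convention for its f-divergences and alpha-EM derivations may differ, and the function name and toy vectors below are hypothetical.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Alpha-divergence D_alpha(p || q) between discrete distributions.

    Uses Amari's parameterization (an assumption; the paper's exact
    convention may differ): alpha -> -1 recovers KL(p || q) and
    alpha -> +1 recovers KL(q || p).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, -1.0):   # limiting case: KL(p || q)
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 1.0):    # limiting case: KL(q || p)
        return float(np.sum(q * np.log(q / p)))
    coeff = 4.0 / (1.0 - alpha ** 2)
    return float(coeff * (1.0 - np.sum(p ** ((1.0 - alpha) / 2.0)
                                       * q ** ((1.0 + alpha) / 2.0))))

# Sanity checks on toy distributions (hypothetical data):
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(alpha_divergence(p, p, 0.5))   # ~0.0: vanishes when p == q
print(alpha_divergence(p, q, 0.5))   # > 0: positive for distinct p, q
```

    In this reading, the "faster algorithms" of the abstract come from choosing a divergence in such a family other than the pure log-likelihood limit, while the limiting case reproduces the ordinary likelihood-based update.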

    Original language: English
    Host publication title: Proceedings of the International Joint Conference on Neural Networks
    Pages: 1883-1888
    Number of pages: 6
    Volume: 2
    Publication status: Published - 2002
    Event: 2002 International Joint Conference on Neural Networks (IJCNN '02) - Honolulu, HI
    Duration: 12 May 2002 - 17 May 2002


    ASJC Scopus subject areas

    • Software
