Abstract
Utterance clustering is one of the actively researched topics in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining the left- and right-channel audio signals in a few different ways and then extracting the embedded features (also called d-vectors) from those processed audio signals. This study applied the Gaussian mixture model for supervised utterance clustering. In the training phase, a parameter-sharing Gaussian mixture model was trained for each speaker. In the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Results of experiments with real audio recordings of multi-person discussion sessions showed that the proposed method using multichannel audio signals achieved significantly better performance than a conventional method with mono audio signals under more complicated conditions.
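The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: stereo channels are combined, a d-vector is extracted per utterance, one Gaussian mixture model is fitted per speaker, and the test-time speaker is chosen by maximum likelihood. The channel-combination rule, the d-vector extractor (`extract_dvector`), the embedding dimensionality, and the use of scikit-learn's `GaussianMixture` in place of the paper's parameter-sharing GMM are all assumptions made here for illustration.

```python
# Illustrative sketch only; assumptions are noted in the comments.
import numpy as np
from sklearn.mixture import GaussianMixture

RNG = np.random.default_rng(0)
DVEC_DIM = 256  # assumed d-vector dimensionality


def combine_channels(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """One simple way to merge stereo channels: average them.
    The paper explores several combinations; this is just one example."""
    return 0.5 * (left + right)


def extract_dvector(signal: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder for a neural d-vector extractor.
    Here a random embedding stands in so the sketch runs end to end."""
    return RNG.normal(size=DVEC_DIM)


def train_speaker_gmms(dvectors_by_speaker, n_components=2):
    """Training phase: fit one GMM per speaker on that speaker's d-vectors.
    (The paper's parameter-sharing variant is not reproduced here.)"""
    gmms = {}
    for speaker, dvecs in dvectors_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(np.vstack(dvecs))
        gmms[speaker] = gmm
    return gmms


def detect_speaker(gmms, dvector):
    """Testing phase: pick the speaker whose GMM gives the highest log-likelihood."""
    scores = {spk: gmm.score(dvector.reshape(1, -1)) for spk, gmm in gmms.items()}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Synthetic stand-in data: 10 utterances each for two speakers.
    dummy_signal = combine_channels(np.zeros(16000), np.zeros(16000))
    train = {spk: [extract_dvector(dummy_signal) for _ in range(10)]
             for spk in ("speaker_A", "speaker_B")}
    gmms = train_speaker_gmms(train)
    test_dvec = extract_dvector(dummy_signal)
    print("detected speaker:", detect_speaker(gmms, test_dvec))
```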
Original language | English |
---|---|
Article number | 6151651 |
Journal | Computational Intelligence and Neuroscience |
Volume | 2021 |
DOI | |
Publication status | Published - 2021 |
Externally published | Yes |
ASJC Scopus subject areas
- Computer Science (General)
- Neuroscience (General)
- Mathematics (General)