Utterance Clustering Using Stereo Audio Channels

Yingjun Dong*, Neil G. MacLaren, Yiding Cao, Francis J. Yammarino, Shelley D. Dionne, Michael D. Mumford, Shane Connelly, Hiroki Sayama, Gregory A. Ruark

*Corresponding author for this work

Research output: Article › peer-review

Abstract

Utterance clustering is an actively researched topic in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining left- and right-channel audio signals in a few different ways and then extracting the embedded features (also called d-vectors) from those processed audio signals. This study applied the Gaussian mixture model for supervised utterance clustering. In the training phase, a parameter-sharing Gaussian mixture model was obtained to train the model for each speaker. In the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Results of experiments with real audio recordings of multiperson discussion sessions showed that the proposed method using multichannel audio signals achieved significantly better performance than a conventional method with mono audio signals under more complex conditions.
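The train/test procedure described in the abstract (one Gaussian mixture model per speaker, then maximum-likelihood speaker selection) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the d-vector extraction step is replaced by synthetic embeddings, and all names (`detect_speaker`, `speaker_a`, `speaker_b`) are hypothetical.

```python
# Sketch of supervised GMM-based speaker selection over utterance
# embeddings (d-vectors). Synthetic embeddings stand in for real
# d-vectors extracted from processed stereo audio.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy d-vectors: two speakers, 50 utterances each, 16-dim embeddings,
# drawn from well-separated distributions so the example is learnable.
train = {
    "speaker_a": rng.normal(loc=0.0, scale=1.0, size=(50, 16)),
    "speaker_b": rng.normal(loc=3.0, scale=1.0, size=(50, 16)),
}

# Training phase: fit one GMM on each speaker's d-vectors.
models = {
    name: GaussianMixture(
        n_components=2, covariance_type="diag", random_state=0
    ).fit(vecs)
    for name, vecs in train.items()
}

def detect_speaker(d_vector):
    """Testing phase: return the speaker whose GMM assigns the highest
    log-likelihood to the utterance's d-vector."""
    scores = {
        name: gmm.score(d_vector.reshape(1, -1))
        for name, gmm in models.items()
    }
    return max(scores, key=scores.get)
```

For example, `detect_speaker(np.full(16, 3.0))` should pick `"speaker_b"`, since that vector lies near the second speaker's embedding distribution. The paper's parameter-sharing across speaker models is omitted here for brevity.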

Original language: English
Article number: 6151651
Journal: Computational Intelligence and Neuroscience
2021
DOI
Publication status: Published - 2021
Externally published: Yes

ASJC Scopus subject areas

  • General Computer Science
  • General Neuroscience
  • General Mathematics
