Sound source localization using deep learning models

Nelson Yalta, Kazuhiro Nakadai, Tetsuya Ogata

Research output: Article › peer-review

69 Citations (Scopus)

Abstract

This study proposes the use of a deep neural network to localize a sound source using an array of microphones in a reverberant environment. During the last few years, applications based on deep neural networks have performed tasks such as image classification and speech recognition at levels that exceed even human capabilities. In our study, we employ deep residual networks, which have recently shown remarkable performance in image classification tasks even when the training period is shorter than that of other models. Deep residual networks are used to process audio input similarly to multiple signal classification (MUSIC) methods. We show that with end-to-end training and generic preprocessing, the performance of deep residual networks not only surpasses the block-level accuracy of linear models in nearly clean environments but also shows robustness to challenging conditions by exploiting the time delay and power information.
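For context, the abstract compares the network's input processing to classical MUSIC localization. The sketch below is not the paper's pipeline; it is a generic narrowband MUSIC implementation for a hypothetical uniform linear microphone array, assuming far-field propagation and NumPy, to illustrate the baseline the learned model is measured against.

```python
import numpy as np

def music_spectrum(X, n_sources, mic_pos, freq, angles, c=343.0):
    """MUSIC pseudo-spectrum for a linear microphone array.

    X        : (n_mics, n_snapshots) complex STFT snapshots at one frequency bin
    mic_pos  : microphone positions along the array axis in meters
    freq     : frequency of the bin in Hz
    angles   : candidate arrival angles in radians (0 = broadside)
    """
    # Spatial covariance matrix estimated from the snapshots
    R = X @ X.conj().T / X.shape[1]
    # eigh returns eigenvalues in ascending order; the smallest ones
    # span the noise subspace
    _, eigvecs = np.linalg.eigh(R)
    noise_sub = eigvecs[:, : X.shape[0] - n_sources]
    spec = []
    for theta in angles:
        # Far-field steering vector: per-mic delays for a plane wave
        delays = mic_pos * np.sin(theta) / c
        a = np.exp(-2j * np.pi * freq * delays)
        # MUSIC peaks where the steering vector is orthogonal
        # to the noise subspace
        spec.append(1.0 / (np.linalg.norm(noise_sub.conj().T @ a) ** 2))
    return np.array(spec)

# Simulated example: one source at 30 degrees, 4 mics spaced 5 cm apart
rng = np.random.default_rng(0)
mic_pos = np.arange(4) * 0.05
freq = 1000.0
theta_true = np.deg2rad(30.0)
a_true = np.exp(-2j * np.pi * freq * mic_pos * np.sin(theta_true) / 343.0)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.05 * (rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200)))
X = np.outer(a_true, s) + noise

angles = np.deg2rad(np.linspace(-90, 90, 181))
spec = music_spectrum(X, n_sources=1, mic_pos=mic_pos, freq=freq, angles=angles)
estimate_deg = np.rad2deg(angles[np.argmax(spec)])
print(f"estimated direction: {estimate_deg:.0f} degrees")
```

In the paper's framing, a residual network consumes similar multichannel spectral input end to end, instead of relying on this explicit subspace decomposition.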

Original language: English
Pages (from-to): 37-48
Number of pages: 12
Journal: Journal of Robotics and Mechatronics
Volume: 29
Issue number: 1
DOI
Publication status: Published - Feb 2017

ASJC Scopus subject areas

  • Computer Science (General)
  • Electrical and Electronic Engineering

