Sound source separation using null-beamforming and spectral subtraction for mobile devices

Shintaro Takada*, Satoshi Kanba, Tetsuji Ogawa, Kenzo Akagiri, Tetsunori Kobayashi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

9 Citations (Scopus)

Abstract

This paper presents a new speech segregation method for mobile devices in noisy situations where two or more speakers are talking simultaneously. The proposed method combines multiple null-beamformers, minimum-power channel selection among their outputs, and spectral subtraction. It operates with a space-saving, coplanar microphone arrangement and low computational cost, both essential requirements for mobile applications. Its effectiveness is demonstrated in segregation and recognition experiments on two simultaneous continuous utterances: the method improved the PESQ-based MOS value by about one point and reduced word recognition errors by 70% compared with unprocessed input.
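The processing chain described in the abstract (per-direction null-beamforming, minimum-power channel selection, then spectral subtraction) can be illustrated with a minimal sketch. This is not the authors' implementation: the two-microphone delay-and-subtract null steering, the candidate delay list, and the subtraction floor are all illustrative assumptions.

```python
import numpy as np

def null_beamform(x_left, x_right, delay):
    """Delay-and-subtract beamformer: places a spatial null toward the
    source whose inter-microphone delay (in samples) equals `delay`.
    (Illustrative stand-in for the paper's null-beamformers.)"""
    return x_left - np.roll(x_right, delay)

def spectral_subtraction(signal, noise_est, n_fft=512, floor=0.01):
    """Subtract a noise magnitude estimate in the frequency domain,
    keeping a small spectral floor to limit musical noise."""
    spec = np.fft.rfft(signal[:n_fft])
    noise_spec = np.fft.rfft(noise_est[:n_fft])
    mag = np.maximum(np.abs(spec) - np.abs(noise_spec),
                     floor * np.abs(spec))
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=n_fft)

def separate(x_left, x_right, candidate_delays):
    """Run one null-beamformer per hypothesized source direction and
    keep the minimum-power output channel (the channel whose null best
    cancels the dominant interferer)."""
    outputs = [null_beamform(x_left, x_right, d) for d in candidate_delays]
    powers = [np.mean(o ** 2) for o in outputs]
    return outputs[int(np.argmin(powers))]
```

When a single source arrives with a given inter-microphone delay, the beamformer steered to that delay cancels it almost exactly, so minimum-power selection picks out the channel in which the interferer has been suppressed; spectral subtraction then removes the residual.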

Original language: English
Title of host publication: 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA
Pages: 30-33
Number of pages: 4
DOIs
Publication status: Published - 2007 Dec 1
Event: 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA - New Paltz, NY, United States
Duration: 2007 Oct 21 - 2007 Oct 24

Publication series

Name: IEEE Workshop on Applications of Signal Processing to Audio and Acoustics

Conference

Conference: 2007 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA
Country/Territory: United States
City: New Paltz, NY
Period: 07/10/21 - 07/10/24

ASJC Scopus subject areas

  • Signal Processing
