Sound source separation for robot audition using deep learning

Kuniaki Noda, Naoya Hashimoto, Kazuhiro Nakadai, Tetsuya Ogata

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

    8 Citations (Scopus)

    Abstract

    Noise-robust speech recognition is crucial for effective human-machine interaction in real-world environments. Sound source separation (SSS) is one of the most widely used approaches to noise-robust speech recognition: it extracts a target speaker's speech signal while suppressing simultaneous unintended signals. However, conventional SSS algorithms, such as independent component analysis or nonlinear principal component analysis, are limited in their ability to model complex projections scalably. Moreover, conventional systems require designing an independent subsystem for noise reduction (NR) in addition to the SSS. To overcome these issues, we propose a deep neural network (DNN) framework for modeling the separation function (SF) of an SSS system. By training a DNN to predict clean sound features of a target sound from the corresponding multichannel deteriorated sound feature inputs, we enable the DNN to model the SF for extracting the target sound without prior knowledge of the acoustic properties of the surrounding environment. Moreover, the same DNN is trained to function simultaneously as an NR filter. Our proposed SSS system is evaluated on an isolated word recognition task and a large vocabulary continuous speech recognition task in which either nondirectional or directional noise is added to the target speech. Our evaluation results demonstrate that the DNN performs noticeably better than the baseline approach, especially when directional noise is added at a low signal-to-noise ratio.
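
    To make the abstract's idea concrete, the following is a minimal, illustrative sketch (not the authors' code) of a feed-forward DNN that regresses clean target-speaker features from stacked multichannel noisy features, so that separation and noise reduction are learned jointly by one network. The channel count, feature type (log-mel), context width, layer sizes, and optimizer settings are assumptions for illustration, not values taken from the paper.

    import torch
    import torch.nn as nn

    N_CHANNELS = 8   # assumed microphone-array size
    N_MELS = 40      # assumed log-mel feature dimension per channel
    CONTEXT = 5      # assumed context frames on each side of the current frame

    class SeparationDNN(nn.Module):
        """Maps stacked multichannel noisy features to a clean target feature frame."""
        def __init__(self):
            super().__init__()
            in_dim = N_CHANNELS * N_MELS * (2 * CONTEXT + 1)
            self.net = nn.Sequential(
                nn.Linear(in_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 1024), nn.ReLU(),
                nn.Linear(1024, N_MELS),  # predicted clean single-channel frame
            )

        def forward(self, x):
            return self.net(x)

    # Training sketch: minimize MSE between predicted and clean target features,
    # which lets the same network act as both the separation function and an NR filter.
    model = SeparationDNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    noisy = torch.randn(32, N_CHANNELS * N_MELS * (2 * CONTEXT + 1))  # dummy batch
    clean = torch.randn(32, N_MELS)                                   # dummy targets
    loss = loss_fn(model(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()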

    Original language: English
    Title of host publication: IEEE-RAS International Conference on Humanoid Robots
    Publisher: IEEE Computer Society
    Pages: 389-394
    Number of pages: 6
    Volume: 2015-December
    ISBN (Print): 9781479968855
    DOIs
    Publication status: Published - 2015 Dec 22
    Event: 15th IEEE RAS International Conference on Humanoid Robots, Humanoids 2015 - Seoul, Korea, Republic of
    Duration: 2015 Nov 3 → 2015 Nov 5

    Other

    Other: 15th IEEE RAS International Conference on Humanoid Robots, Humanoids 2015
    Country/Territory: Korea, Republic of
    City: Seoul
    Period: 15/11/3 → 15/11/5

    Keywords

    • Feature extraction
    • Microphones
    • Neural networks
    • Robots
    • Speech
    • Speech recognition
    • Training

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Computer Vision and Pattern Recognition
    • Hardware and Architecture
    • Human-Computer Interaction
    • Electrical and Electronic Engineering
