Design and implementation of robot audition system 'HARK' - Open source software for listening to three simultaneous speakers

Kazuhiro Nakadai*, Toru Takahashi, Hiroshi G. Okuno, Hirofumi Nakajima, Yuji Hasegawa, Hiroshi Tsujino

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

181 Citations (Scopus)

Abstract

This paper presents the design and implementation of HARK, a robot audition software system consisting of sound source localization modules, sound source separation modules and automatic speech recognition (ASR) modules for the separated speech signals; the system works on any robot with any microphone configuration. Since a robot with ears may be deployed in various auditory environments, a robot audition system should provide an easy way to adapt to them. HARK provides a set of modules to cope with various auditory environments by using the open-source middleware FlowDesigner, and it reduces the overhead of data transfer between modules. HARK has been open-sourced since April 2008. The resulting implementation of HARK, with MUSIC-based sound source localization, GSS (geometric source separation)-based sound source separation and Missing Feature Theory-based ASR, runs on Honda ASIMO, SIG2 and Robovie R2 and recognizes three simultaneous utterances from three speakers with a delay of 1.9 s at a word correct rate of 80-90%.
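The abstract names MUSIC-based localization as the first stage of the pipeline. As a rough illustration of that stage only, the following is a minimal NumPy sketch of the MUSIC (MUltiple SIgnal Classification) pseudo-spectrum for direction-of-arrival estimation; it is not HARK's actual C++ implementation, and the function name, planar array geometry, frequency band and parameter choices are hypothetical choices made for this example.

import numpy as np

def music_spectrum(X, mic_pos, freqs, n_sources, c=343.0, n_angles=360):
    # X: complex STFT frames, shape (n_mics, n_freqs, n_frames)
    # mic_pos: (n_mics, 2) microphone coordinates in metres
    # freqs: centre frequency (Hz) of each bin used, length n_freqs
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_angles, 2)
    spectrum = np.zeros(n_angles)
    n_mics = X.shape[0]
    for fi, f in enumerate(freqs):
        Xf = X[:, fi, :]                          # (n_mics, n_frames)
        R = Xf @ Xf.conj().T / Xf.shape[1]        # spatial correlation matrix
        w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
        En = V[:, : n_mics - n_sources]           # noise-subspace eigenvectors
        tau = mic_pos @ dirs.T / c                # (n_mics, n_angles) delays
        A = np.exp(-2j * np.pi * f * tau)         # far-field steering vectors
        # MUSIC pseudo-spectrum: large where the steering vector a(theta)
        # is nearly orthogonal to the noise subspace, i.e. at source azimuths
        denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
        spectrum += 1.0 / np.maximum(denom, 1e-12)
    return angles, spectrum / len(freqs)

Peaks of the averaged pseudo-spectrum indicate candidate source azimuths; as the abstract describes, HARK passes localized directions to the GSS-based separation stage, whose separated streams are then recognized by the Missing Feature Theory-based ASR.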

Original language: English
Pages (from-to): 739-761
Number of pages: 23
Journal: Advanced Robotics
Volume: 24
Issue number: 5-6
DOIs
Publication status: Published - 2010 Apr 14
Externally published: Yes

Keywords

  • Automatic speech recognition
  • Open source software
  • Robot audition
  • Sound source localization
  • Sound source separation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Human-Computer Interaction
  • Computer Science Applications
  • Hardware and Architecture
  • Software
