Abstract
We have developed a system for visualizing auditory awareness based on sound source locations estimated with a depth sensor and a microphone array. Previous studies on visualizing the acoustic environment overlaid sound pressure levels directly on the captured image, so the visualization was often a mixture of several sound sources, and it was not intuitive which targets to focus on. To help users selectively find targets and concentrate on analyzing them, the captured acoustic information should be extracted and presented selectively according to the user's demand. We have designed a three-layer visualization model for auditory awareness consisting of a sound source distribution layer, a sound location layer, and a sound saliency layer. The model extracts acoustic information from the depth image and the multi-directional sound sources captured with the depth sensor and microphone array. This model is used in the system we developed for visualizing auditory awareness.
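The abstract describes a three-layer pipeline (sound source distribution → sound location → sound saliency) built from a depth image and directional sound power from a microphone array. The sketch below is only an illustration of how such a pipeline could be wired together; every name, threshold, and mapping in it (e.g. `AuditoryAwarenessFrame`, the column-wise projection of the direction-of-arrival spectrum) is a hypothetical assumption, not the paper's implementation.

```python
# Minimal sketch of a three-layer auditory-awareness pipeline.
# All interfaces here are assumptions for illustration only.
from dataclasses import dataclass
import numpy as np


@dataclass
class AuditoryAwarenessFrame:
    depth_image: np.ndarray    # H x W depth map from the depth sensor
    doa_spectrum: np.ndarray   # per-direction sound power from the microphone array


def sound_source_distribution_layer(frame: AuditoryAwarenessFrame) -> np.ndarray:
    """Spread the directional sound power over the image plane,
    giving a per-pixel distribution of acoustic energy (assumed mapping)."""
    h, w = frame.depth_image.shape
    # Assume the DOA spectrum is indexed by azimuth and mapped across image columns.
    per_column = np.interp(
        np.linspace(0, len(frame.doa_spectrum) - 1, w),
        np.arange(len(frame.doa_spectrum)),
        frame.doa_spectrum,
    )
    return np.tile(per_column, (h, 1))


def sound_location_layer(distribution: np.ndarray,
                         depth_image: np.ndarray,
                         power_threshold: float = 0.5) -> np.ndarray:
    """Keep only regions whose acoustic energy exceeds a threshold and
    that coincide with valid depth measurements (assumed rule)."""
    mask = (distribution > power_threshold) & (depth_image > 0)
    return distribution * mask


def sound_saliency_layer(located: np.ndarray) -> np.ndarray:
    """Normalize the located sources into a saliency map that can be
    overlaid on the camera image for selective inspection."""
    peak = located.max()
    return located / peak if peak > 0 else located


def visualize_auditory_awareness(frame: AuditoryAwarenessFrame) -> np.ndarray:
    """Chain the three layers: distribution -> location -> saliency."""
    distribution = sound_source_distribution_layer(frame)
    located = sound_location_layer(distribution, frame.depth_image)
    return sound_saliency_layer(located)
```

Chaining the layers this way lets each stage be inspected or toggled separately, which mirrors the abstract's goal of presenting acoustic information selectively rather than as a single mixed overlay.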
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Intelligent Robots and Systems |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1908-1913 |
Number of pages | 6 |
ISBN (Print) | 9781479969340 |
DOIs | |
Publication status | Published - 2014 Oct 31 |
Externally published | Yes |
Event | 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014 - Chicago (Duration: 2014 Sept 14 → 2014 Sept 18)
Other
Other | 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2014 |
---|---|
City | Chicago |
Period | 2014 Sept 14 → 2014 Sept 18
ASJC Scopus subject areas
- Control and Systems Engineering
- Software
- Computer Vision and Pattern Recognition
- Computer Science Applications