Discrimination of speech, musical instruments and singing voices using the temporal patterns of sinusoidal segments in audio signals

Toru Taniguchi*, Akishige Adachi, Shigeki Okawa, Masaaki Honda, Katsuhiko Shirai

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

Abstract

We developed a method for discriminating speech, musical instruments, and singing voices based on sinusoidal decomposition of audio signals. Although many studies have addressed audio classification, few have dealt with the problem of temporal overlap between sound categories. To cope with this problem, we used sinusoidal segments of variable length as the discrimination units, whereas most previous work has used fixed-length units. The discrimination is based on the temporal characteristics of the sinusoidal segments. We achieved an average discrimination rate of 71.56% when classifying sinusoidal segments in non-mixed audio data. At the level of time segments, accuracies of 87.9% on non-mixed-category audio data and 66.4% on two-mixed-category data were achieved. A comparison between the proposed method and an MFCC-based method demonstrated the effectiveness of the temporal features and the importance of using both spectral and temporal characteristics.
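The core idea of the abstract, tracking spectral peaks across frames so that each sinusoidal segment has a data-driven, variable length, can be illustrated with a standard McAulay-Quatieri-style partial tracker. The sketch below is not the authors' algorithm: the paper's actual decomposition, thresholds, and features are not given here, so the frame size, peak threshold, and frequency-matching tolerance are all illustrative assumptions.

```python
# Minimal sketch of sinusoidal segment tracking via STFT peak matching.
# All parameters (n_fft, hop, peak_db, max_jump_hz) are assumptions for
# demonstration, not values from the paper.
import numpy as np

def track_sinusoidal_segments(x, sr, n_fft=1024, hop=256,
                              peak_db=-40.0, max_jump_hz=50.0):
    """Group STFT spectral peaks into variable-length sinusoidal segments.

    Returns a list of segments, each a list of (frame_index, freq_hz, mag).
    """
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    active, finished = [], []          # segments still open / already closed
    for t in range(n_frames):
        frame = x[t * hop : t * hop + n_fft] * window
        spec = np.abs(np.fft.rfft(frame))
        mag_db = 20.0 * np.log10(spec + 1e-12)
        # local maxima above the magnitude threshold are candidate partials
        peaks = [k for k in range(1, len(spec) - 1)
                 if mag_db[k] > peak_db
                 and spec[k] > spec[k - 1] and spec[k] > spec[k + 1]]
        freqs = [k * sr / n_fft for k in peaks]
        matched, still_active = set(), []
        for seg in active:
            last_f = seg[-1][1]
            # extend a segment with the nearest unmatched peak within range
            best = min((i for i in range(len(freqs))
                        if i not in matched
                        and abs(freqs[i] - last_f) <= max_jump_hz),
                       key=lambda i: abs(freqs[i] - last_f), default=None)
            if best is None:
                finished.append(seg)   # no continuation: the segment ends here
            else:
                matched.add(best)
                seg.append((t, freqs[best], spec[peaks[best]]))
                still_active.append(seg)
        # unmatched peaks start new segments
        for i in range(len(freqs)):
            if i not in matched:
                still_active.append([(t, freqs[i], spec[peaks[i]])])
        active = still_active
    return finished + active
```

Because segments begin and end wherever a partial appears and dies out, their durations and frequency trajectories naturally become temporal features of the kind the paper exploits, in contrast to fixed-length MFCC frames.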

Original language: English
Pages: 589-592
Number of pages: 4
Publication status: Published - 2005 Dec 1
Event: 9th European Conference on Speech Communication and Technology - Lisbon, Portugal
Duration: 2005 Sept 4 - 2005 Sept 8

Conference

Conference: 9th European Conference on Speech Communication and Technology
Country/Territory: Portugal
City: Lisbon
Period: 05/9/4 - 05/9/8

ASJC Scopus subject areas

  • Engineering (all)
