Changing timbre and phrase in existing musical performances as you like - Manipulations of single part using harmonic and inharmonic models

Naoki Yasuraoka*, Takehiro Abe, Katsutoshi Itoyama, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

This paper presents a new music manipulation method that can change the timbre and phrases of an existing instrumental performance in a polyphonic sound mixture. This method consists of three primitive functions: 1) extracting and analyzing a single instrumental part from polyphonic music signals, 2) mixing the instrument timbre with another, and 3) rendering a new phrase expression for another given score. The resulting customized part is re-mixed with the remaining parts of the original performance to generate new polyphonic music signals. A single instrumental part is extracted by using an integrated tone model that consists of harmonic and inharmonic tone models, with the aid of the score of that single instrumental part. The extraction incorporates a residual model for the single instrumental part in order to avoid crosstalk between instrumental parts. The extracted model parameters are classified into their averages and deviations. The former are treated as instrument timbre and are customized by mixing, while the latter are treated as phrase expression and are customized by rendering. We evaluated our method in three experiments. The first experiment focused on the introduction of the residual model, and it showed that the model parameters are estimated more accurately by 35.0 points. The second focused on timbral customization, and it showed that our method is more robust by 42.9 points in spectral distance compared with a conventional sound analysis-synthesis method, STRAIGHT. The third focused on the acoustic fidelity of the customized performance, and it showed that rendering phrase expression according to the note sequence leads to more accurate performance by 9.2 points in spectral distance in comparison with a rendering method that ignores the note sequence.
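The abstract's core decomposition, splitting extracted tone-model parameters into averages (instrument timbre) and per-note deviations (phrase expression) so each can be customized independently, can be sketched as follows. This is an illustrative example only, not the authors' implementation; the function names and the use of per-note harmonic-amplitude vectors are assumptions for the sketch.

```python
import numpy as np

def split_timbre_expression(params):
    """Split per-note tone-model parameters (num_notes x num_features,
    e.g. harmonic amplitudes) into an average vector, treated as instrument
    timbre, and per-note deviations, treated as phrase expression."""
    timbre = params.mean(axis=0)   # average across notes -> timbre
    expression = params - timbre   # per-note deviations  -> expression
    return timbre, expression

def mix_timbres(timbre_a, timbre_b, w):
    """Linearly interpolate two timbres; w=0 keeps A, w=1 gives B."""
    return (1.0 - w) * timbre_a + w * timbre_b

def render(timbre, expression):
    """Recombine a (possibly mixed) timbre with a phrase expression."""
    return timbre + expression

# Example: customize part A's timbre halfway toward part B's,
# while keeping part A's own phrase expression.
a = np.array([[1.0, 0.5], [1.2, 0.7], [0.8, 0.3]])  # hypothetical parameters
b = np.array([[2.0, 0.1], [2.2, 0.2], [1.8, 0.0]])
timbre_a, expr_a = split_timbre_expression(a)
timbre_b, _ = split_timbre_expression(b)
customized = render(mix_timbres(timbre_a, timbre_b, 0.5), expr_a)
```

Because the deviations are defined relative to the average, `render(timbre, expression)` exactly reconstructs the original parameters when the timbre is left unmixed, which is what makes the two factors independently editable.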

Original language: English
Title of host publication: MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums
Pages: 203-212
Number of pages: 10
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 17th ACM International Conference on Multimedia, MM'09, with Co-located Workshops and Symposiums - Beijing, China
Duration: 2009 Oct 19 - 2009 Oct 24

Publication series

Name: MM'09 - Proceedings of the 2009 ACM Multimedia Conference, with Co-located Workshops and Symposiums

Conference

Conference: 17th ACM International Conference on Multimedia, MM'09, with Co-located Workshops and Symposiums
Country/Territory: China
City: Beijing
Period: 09/10/19 - 09/10/24

Keywords

  • Music manipulation
  • Performance rendering
  • Signal processing
  • Sound source extraction
  • Timbre mixing

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Software
