Abstract
Music generation is commonly formulated as a note-by-note prediction problem. However, models that generate one musical note at a time may ignore overall coherence, because the musical phrase is incomplete at prediction time and cannot yet demonstrate musicality. To address this issue, we propose a monophonic music generation framework that simulates the subsequent trend of each predicted musical note. The framework generates each note in three steps: 1) a sequence prediction model proposes the most promising candidates, 2) the subsequent trends of each candidate are simulated and evaluated, and 3) the best candidate is selected as the final result. We adopt the Monte-Carlo tree search algorithm for its strong capability of discovering near-optimal results, and we establish a method of training a value network that assesses musical coherence to evaluate the simulated sequences. Further, we use a smoothed polynomial upper confidence trees algorithm to improve the accuracy and efficiency of the search process. We validate the framework on a dataset we accurately labeled, containing 36 samples transcribed from real-world pop songs. Compared with a note-by-note sequence prediction model, our framework exhibits a stronger sense of musicality. It can be applied to generate symbolic monophonic music, particularly the main melody track in pop music.
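The abstract outlines a three-step generation loop: candidate proposal, trend simulation, and selection. The following Python sketch illustrates one plausible reading of that loop; it is not the paper's implementation. The stubs `predict_topk` (the sequence prediction model) and `value_net` (the coherence value network) are hypothetical placeholders, and the selection rule is a standard PUCT-style score rather than the paper's smoothed variant, whose exact form is not given here.

```python
import math
import random

def predict_topk(context, k=4):
    """Hypothetical stub for the sequence prediction model:
    returns k candidate next notes with prior probabilities."""
    candidates = random.sample(range(60, 72), k)  # MIDI pitches, illustration only
    return [(note, 1.0 / k) for note in candidates]

def value_net(sequence):
    """Hypothetical stub for the trained value network:
    scores the musical coherence of a (partial) sequence in [0, 1]."""
    return random.random()

def puct_score(q, prior, parent_visits, visits, c=1.5):
    # Standard PUCT-style exploration bonus (the paper uses a smoothed variant).
    return q + c * prior * math.sqrt(parent_visits) / (1 + visits)

def select_next_note(context, n_simulations=50, rollout_len=8):
    """One generation step: propose candidates (step 1), simulate and
    evaluate their subsequent trends (step 2), pick the best (step 3)."""
    stats = {note: {"visits": 0, "value": 0.0, "prior": p}
             for note, p in predict_topk(context)}
    for _ in range(n_simulations):
        total = sum(s["visits"] for s in stats.values()) + 1
        # Choose the candidate with the highest PUCT score.
        note = max(stats, key=lambda n: puct_score(
            stats[n]["value"] / max(stats[n]["visits"], 1),
            stats[n]["prior"], total, stats[n]["visits"]))
        # Roll out a short continuation: the candidate's "subsequent trend".
        rollout = context + [note]
        for _ in range(rollout_len):
            rollout.append(predict_topk(rollout, k=1)[0][0])
        # Back up the value network's coherence estimate.
        stats[note]["visits"] += 1
        stats[note]["value"] += value_net(rollout)
    # Return the most visited (near-optimal) candidate.
    return max(stats, key=lambda n: stats[n]["visits"])

melody = [60, 62, 64]          # seed context (MIDI note numbers)
melody.append(select_next_note(melody))
print(melody)
```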
| Original language | English |
| --- | --- |
| Journal | IEEE Transactions on Multimedia |
| DOI | |
| Publication status | Accepted/In press - 2022 |
ASJC Scopus subject areas
- Signal Processing
- Media Technology
- Computer Science Applications
- Electrical and Electronic Engineering