DanceReProducer: An automatic mashup music video generation system by reusing dance video clips on the web

Tomoyasu Nakano, Sora Murofushi, Masataka Goto, Shigeo Morishima

Research output: Contribution to conference › Paper › peer-review

14 Citations (Scopus)


We propose a dance video authoring system, DanceReProducer, that can automatically generate a dance video clip appropriate to a given piece of music by segmenting and concatenating existing dance video clips. In this paper, we focus on reusing the ever-increasing number of user-generated dance video clips on video sharing web services. In a video clip consisting of music (audio signals) and image sequences (video frames), the image sequences are often synchronized with or related to the music. These relationships differ from clip to clip, but were not dealt with by previous methods for automatic music video generation. Our system employs machine learning and beat tracking techniques to model these relationships. To generate new music video clips, short image sequences previously extracted from other music clips are stretched and concatenated so that the resulting image sequence matches the rhythmic structure of the target song. Besides automatically generating music videos, DanceReProducer offers a user interface in which a user can interactively change image sequences simply by choosing different candidates. This way, people with little knowledge of or experience in MAD movie generation can interactively create personalized video clips.
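The stretch-and-concatenate step described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the fixed beats-per-segment grouping, and the use of plain beat timestamps are all illustrative assumptions about how clip durations could be matched to a target song's rhythmic structure.

```python
# Hypothetical sketch (not DanceReProducer's actual code): compute the
# time-stretch ratio that makes each reused clip segment span a group of
# beats in the target song, so concatenated clips stay beat-aligned.

def stretch_factors(target_beats, clip_durations, beats_per_segment=4):
    """For each consecutive group of `beats_per_segment` beats in the
    target song, take the next candidate clip and return the stretch
    ratio that makes the clip fill that beat segment exactly."""
    factors = []
    i = 0
    for clip_dur in clip_durations:
        if i + beats_per_segment >= len(target_beats):
            break  # no complete beat segment left in the target song
        segment_dur = target_beats[i + beats_per_segment] - target_beats[i]
        factors.append(segment_dur / clip_dur)
        i += beats_per_segment
    return factors

# Target song with a beat every 0.5 s (120 BPM) and three candidate clips
# of 2 s, 1 s, and 4 s: each must fill a 2 s four-beat segment.
beats = [0.5 * k for k in range(17)]
print(stretch_factors(beats, [2.0, 1.0, 4.0]))  # → [1.0, 2.0, 0.5]
```

In a full system the beat timestamps would come from a beat-tracking analysis of the target song, and the returned ratios would drive the video-frame resampling of each clip before concatenation.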

Original language: English
Publication status: Published - 2011 Jan 1
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 2011 Jul 6 – 2011 Jul 9


Conference: 8th Sound and Music Computing Conference, SMC 2011

ASJC Scopus subject areas

  • Computer Science (all)


