Abstract
In this paper, we propose a new time-reduction method for video skimming in which the focus is on the overall playback time. While fast-forwarding is a natural way to check whether or not items are of interest, the sound is not synchronized with the images, and the lack of comprehensible audio means that the viewer must work from the images alone. In video summarization, the focus to date has been solely on video segmentation, i.e. building a structure that represents the parts of the video and the flow of meaning through them. In our system, the user simply specifies the running time required for the summarized video. We describe the current state of our prototype system and present test results that demonstrate its effectiveness.
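The paper does not spell out its selection algorithm in the abstract, but the core interaction (the user specifies a target running time and the system fits a summary into it) can be illustrated with a minimal sketch. The sketch below assumes hypothetical `Segment` records with precomputed importance scores and uses a simple greedy fill; the names, scores, and strategy are illustrative assumptions, not the authors' method.

```python
# Hypothetical illustration only: the paper's abstract does not specify this algorithm.
# A minimal sketch of fitting scored video segments into a user-specified running time,
# assuming each segment already carries an importance score.

from dataclasses import dataclass


@dataclass
class Segment:
    start: float      # segment start time in seconds
    duration: float   # segment length in seconds
    score: float      # assumed importance score (higher = more relevant)


def summarize(segments: list[Segment], target_seconds: float) -> list[Segment]:
    """Greedily pick the highest-scoring segments until the target time is filled."""
    chosen: list[Segment] = []
    used = 0.0
    for seg in sorted(segments, key=lambda s: s.score, reverse=True):
        if used + seg.duration <= target_seconds:
            chosen.append(seg)
            used += seg.duration
    # Play the selection in original temporal order to preserve the video's flow.
    return sorted(chosen, key=lambda s: s.start)


if __name__ == "__main__":
    shots = [Segment(0, 30, 0.2), Segment(30, 20, 0.9),
             Segment(50, 40, 0.6), Segment(90, 25, 0.8)]
    summary = summarize(shots, target_seconds=60)
    print([(s.start, s.duration) for s in summary])
```

A real system would trade off coverage against the time budget more carefully (e.g. knapsack-style selection), but the greedy version keeps the role of the user-supplied running time explicit.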
| Original language | English |
| --- | --- |
| Pages (from-to) | 178-186 |
| Number of pages | 9 |
| Journal | Proceedings of SPIE - The International Society for Optical Engineering |
| Volume | 5305 |
| DOI | |
| Publication status | Published - 2004 |
| Externally published | Yes |
| Event | Multimedia Computing and Networking 2004 - San Jose, CA, United States. Duration: 21 Jan 2004 → 22 Jan 2004 |
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Condensed Matter Physics
- Computer Science Applications
- Applied Mathematics
- Electrical and Electronic Engineering