Treatment of laser pointer and speech information in lecture scene retrieval

Wataru Nakano*, Takashi Kobayashi, Yutaka Katsuyama, Satoshi Naoi, Haruo Yokota

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We have previously proposed a unified presentation content search mechanism named UPRISE (Unified Presentation Slide Retrieval by Impression Search Engine), together with a method that uses laser pointer information in lecture scene retrieval. In this paper, we discuss how laser pointer and speech information should be treated, and propose two methods for filtering the laser pointer information based on keyword occurrences in slides and speech. We also propose weighting schemata that combine the filtered laser pointer information with slide text and speech information. We evaluate our approach using actual lecture videos and presentation slides.
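The abstract only describes the approach at a high level: laser pointer information is filtered by keyword occurrence in slide text and speech, and the filtered information contributes to scene weighting. The Python sketch below is not the paper's method; it simply illustrates that general flow under assumed data structures (a hypothetical PointerEvent record, per-scene slide text and speech transcripts) and an arbitrary weighting constant.

# Illustrative sketch only: keyword-based filtering of laser pointer events
# and a simple scene-weighting step. All names and constants are hypothetical.

from dataclasses import dataclass


@dataclass
class PointerEvent:
    scene_id: int       # lecture scene in which the pointer event occurred
    pointed_text: str   # slide text assumed to lie near the pointed position


def filter_by_keywords(events, keywords, slide_text, speech_text):
    """Keep pointer events whose nearby text shares a query keyword that also
    occurs in that scene's slide text or speech transcript (assumed scheme)."""
    kept = []
    for ev in events:
        hits = {k for k in keywords if k.lower() in ev.pointed_text.lower()}
        in_slide = any(k.lower() in slide_text[ev.scene_id].lower() for k in hits)
        in_speech = any(k.lower() in speech_text[ev.scene_id].lower() for k in hits)
        if hits and (in_slide or in_speech):
            kept.append(ev)
    return kept


def weight_scenes(filtered_events, alpha=1.0):
    """Score each scene in proportion to its filtered pointer hits (alpha is
    an arbitrary illustrative weight, not a value from the paper)."""
    scores = {}
    for ev in filtered_events:
        scores[ev.scene_id] = scores.get(ev.scene_id, 0.0) + alpha
    return scores

In the paper itself, the filtering methods and weighting schemata are defined within the UPRISE framework over indexed slides, speech, and pointer data; the sketch is only meant to convey how keyword filtering and scene weighting fit together.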

Original language: English
Title of host publication: ISM 2006 - 8th IEEE International Symposium on Multimedia
Pages: 927-932
Number of pages: 6
DOIs
Publication status: Published - 2006
Externally published: Yes
Event: ISM 2006 - 8th IEEE International Symposium on Multimedia - San Diego, CA, United States
Duration: 2006 Dec 11 - 2006 Dec 13

Publication series

Name: ISM 2006 - 8th IEEE International Symposium on Multimedia

Conference

Conference: ISM 2006 - 8th IEEE International Symposium on Multimedia
Country/Territory: United States
City: San Diego, CA
Period: 06/12/11 - 06/12/13

ASJC Scopus subject areas

  • Computer Networks and Communications
