Topic-based generation of keywords and caption for video content

Masanao Okamoto*, Kiichi Hasegawa, Sho Sobue, Akira Nakamura, Satoshi Tamura, Satoru Hayamizu

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review

1 Citation (Scopus)

Abstract

This paper studies the combined use of keywords and captions within a single scene of video content. Captions display the spoken content and are updated sentence by sentence. A method is proposed to extract keywords automatically from transcribed text: it estimates topic boundaries, extracts keywords using Latent Dirichlet Allocation (LDA), and presents them in a speech-balloon captioning system. The proposed method is evaluated experimentally in terms of ease of viewing and helpfulness for understanding the video content. Adding keywords to captions received favorable scores in subjective assessments.
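The abstract's pipeline (segment the transcript by topic, then pick keywords per segment for caption display) can be sketched roughly as follows. This is a hypothetical, stdlib-only illustration: the toy transcript segments are invented, topic boundaries are assumed to be given, and a simple TF-IDF-style score stands in for the paper's LDA-based keyword extraction.

```python
# Hypothetical sketch of per-segment keyword extraction for captions.
# A TF-IDF-style score is used as a stdlib-only stand-in for the paper's
# LDA; topic segments are assumed to be already estimated.
import math
from collections import Counter

# Toy "transcribed text", one string per estimated topic segment.
segments = [
    "the rocket engine burns fuel and oxidizer to produce thrust",
    "the satellite orbit depends on its velocity and altitude",
    "the chef adds salt and pepper while the sauce simmers",
]

STOPWORDS = {"the", "and", "to", "on", "its", "while", "a", "of"}

def tokens(text):
    """Lowercase, whitespace-tokenize, and drop stopwords."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

docs = [tokens(s) for s in segments]
# Document frequency: in how many segments each word appears.
df = Counter(w for d in docs for w in set(d))

def segment_keywords(seg_idx, n_words=3):
    """Top-scoring words of one segment, to be shown with its caption."""
    tf = Counter(docs[seg_idx])
    score = {w: c * math.log(len(docs) / df[w]) for w, c in tf.items()}
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:n_words]]

for i in range(len(segments)):
    print(i, segment_keywords(i))
```

In the paper's actual method, the score above would instead come from the LDA topic-word distribution of the segment's dominant topic; the overall flow (boundary estimation, per-segment keyword ranking, display alongside the caption) is the same.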

Original language: English
Pages: 605-608
Number of pages: 4
Publication status: Published - 2010
Externally published: Yes
Event: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010 - Biopolis, Singapore
Duration: 2010 Dec 14 - 2010 Dec 17

Conference

Conference: 2nd Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2010
Country/Territory: Singapore
City: Biopolis
Period: 10/12/14 - 10/12/17

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
