Video degradation model and its application to character recognition in e-learning videos

Jun Sun*, Yutaka Katsuyama, Satoshi Naoi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

With the rapid popularization of digital imaging equipment, video character recognition is becoming increasingly important. Compared with traditionally scanned documents, characters in video documents usually suffer from severe degradation and are difficult to recognize. A systematic study of video degradation is therefore very useful for video OCR. In this paper, a video degradation model is proposed to imitate the process of video character image generation. The generated character images are used to build synthetic dictionaries that improve recognition performance on real degraded characters in e-Learning videos. Experiments on 24,317 e-Learning video characters demonstrate the effectiveness of our method.
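The abstract does not spell out the stages of the degradation model, so the following is only a minimal illustrative sketch of the general idea: apply a synthetic degradation pipeline (here assumed to be blur, downsampling, and additive noise) to clean character renderings and use the results as extra recognition templates (a "synthetic dictionary"). The function name, parameters, and choice of stages are assumptions for illustration, not the paper's actual model.

```python
# Hypothetical sketch of a video-style degradation pipeline for a clean
# grayscale character image (values in [0, 255]). The stages are assumptions:
# Gaussian blur stands in for optical/encoder blur, zoom for resolution loss,
# and additive noise for sensor/compression artifacts.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def degrade_character(clean: np.ndarray,
                      blur_sigma: float = 1.2,
                      scale: float = 0.5,
                      noise_std: float = 8.0,
                      seed: int = 0) -> np.ndarray:
    """Return a synthetically degraded version of a clean character image."""
    rng = np.random.default_rng(seed)
    img = clean.astype(np.float64)
    img = gaussian_filter(img, sigma=blur_sigma)        # simulated blur
    img = zoom(img, scale, order=1)                     # simulated low resolution
    img = img + rng.normal(0.0, noise_std, img.shape)   # simulated noise
    return np.clip(img, 0, 255).astype(np.uint8)


# Usage idea: degrade many clean font renderings of each character class and
# add the degraded images to the recognizer's template set, so that matching
# against real video characters becomes more robust.
```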

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Editors: Simone Marinai, Andreas Dengel
Publisher: Springer Verlag
Pages: 555-558
Number of pages: 4
ISBN (Print): 3540230602
DOIs
Publication status: Published - 2004
Externally published: Yes

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3163
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
