Object recognition with luminance, rotation and location invariance

Takami Satonaka*, Takaaki Baba, Tatsuo Otsuki, Takao Chikamura, Teresa H. Meng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

In this paper we propose a neural network based on image synthesis, histogram-adaptive quantization, and the discrete cosine transform (DCT) for object recognition with luminance, rotation, and location invariance. An efficient representation of the invariant features is constructed using a three-dimensional memory structure. The benefit of luminance and rotation invariance is demonstrated by reduced error rates in face recognition: with the aid of the proposed image-synthesis procedure, the error rate of the two-dimensional DCT classifier improves from 13.6% to 2.4%. This 2.4% error rate is better than all previously reported results using Karhunen-Loève transform convolution networks and eigenface models. In using the DCT, our approach also enjoys the additional advantage of greatly reduced computational complexity.
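The paper itself provides no code, but the low-frequency 2-D DCT feature extraction it builds on can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the size of the retained coefficient block, and the use of NumPy are assumptions for the sake of the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n): row k holds the
    # k-th cosine basis vector sampled at the n pixel positions.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)          # DC row scaled for orthonormality
    return m * np.sqrt(2 / n)

def dct2_features(image, keep=8):
    # 2-D DCT via separable matrix products, keeping only the
    # low-frequency keep x keep block as a compact feature vector.
    h, w = image.shape
    coeffs = dct_matrix(h) @ image @ dct_matrix(w).T
    return coeffs[:keep, :keep].ravel()
```

Truncating to the low-frequency block is what gives the DCT its computational advantage over the Karhunen-Loève transform: the basis is fixed and image-independent, so no covariance estimation or eigendecomposition is needed per dataset.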

Original language: English
Title of host publication: IEEE International Conference on Image Processing
Place of publication: Los Alamitos, CA, United States
Publisher: IEEE Comp Soc
Pages: 336-339
Number of pages: 4
Volume: 3
Publication status: Published - 1997
Externally published: Yes
Event: Proceedings of the 1997 International Conference on Image Processing. Part 2 (of 3) - Santa Barbara, CA, USA
Duration: 1997 Oct 26 - 1997 Oct 29


ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Hardware and Architecture
  • Electrical and Electronic Engineering
