A fine-grain scalable and low memory cost variable block size motion estimation architecture for H.264/AVC

Zhenyu Liu*, Yang Song, Takeshi Ikenaga, Satoshi Goto

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

A full-search variable block size motion estimation (VBSME) architecture with integer-pixel accuracy is proposed in this paper. The proposed architecture has the following features: (1) By widening the data path from the search area memories, m processing element groups (PEGs), where m is a factor of sixteen, can be scheduled to work in parallel and be fully utilized. Each PEG contains sixteen processing elements (PEs) and costs only 8.5K gates. This gives users more flexibility to trade off hardware cost against performance. (2) Based on pipelining and multi-cycle data path techniques, the architecture can operate at a high clock frequency. (3) The number of memory partitions is greatly reduced: when sixteen PEGs are adopted, only two memory partitions are required for search area data storage, so both system hardware cost and power consumption are saved. A 16-PEG design with a 48 × 32 search range has been implemented in TSMC 0.18 μm CMOS technology. Under typical operating conditions, its maximum clock frequency is 261 MHz. Compared with the previous 2-D architecture [9], about 13.4% of the hardware cost and 5.7% of the power consumption can be saved.
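The SAD-reuse idea underlying VBSME architectures like this one can be illustrated in software: 4 × 4 sub-block SADs are computed once per search candidate and then merged into the larger H.264 partition sizes, rather than recomputed. The sketch below is a minimal, illustrative model of full-search integer-pixel matching for the 16 × 16 partition only; it does not model the paper's hardware (PEGs, pipelining, or memory partitioning), and the function and variable names are this sketch's own.

```python
# Minimal software sketch of full-search VBSME SAD reuse (illustrative only;
# not the paper's hardware architecture). 4x4 SADs are computed once per
# candidate position and summed to form the 16x16 SAD; the same 4x4 results
# could likewise be merged into 8x8, 8x16, 16x8, etc.
import numpy as np

def sad4x4_grid(cur, ref, dy, dx):
    """Return a 4x4 array holding the sixteen 4x4-sub-block SADs of a
    16x16 current block against the reference window at offset (dy, dx)."""
    diff = np.abs(cur.astype(int) - ref[dy:dy + 16, dx:dx + 16].astype(int))
    # Group the 16x16 difference map into 4x4 sub-blocks and sum each one.
    return diff.reshape(4, 4, 4, 4).sum(axis=(1, 3))

def full_search_16x16(cur, ref):
    """Exhaustively test every position where a 16x16 block fits inside the
    search area; return the best motion vector and its 16x16 SAD."""
    best_mv, best_sad = None, None
    for dy in range(ref.shape[0] - 15):
        for dx in range(ref.shape[1] - 15):
            # Merge the sixteen 4x4 SADs into the 16x16 partition SAD.
            sad16 = int(sad4x4_grid(cur, ref, dy, dx).sum())
            if best_sad is None or sad16 < best_sad:
                best_mv, best_sad = (dy, dx), sad16
    return best_mv, best_sad
```

In hardware, each candidate's 4x4 SADs are produced by the PE arrays in parallel and combined by an adder tree, which is why the same datapath serves all block sizes at once.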

Original language: English
Pages (from-to): 1928-1936
Number of pages: 9
Journal: IEICE Transactions on Electronics
Volume: E89-C
Issue number: 12
DOIs
Publication status: Published - 2006 Dec

Keywords

  • AVC
  • H.264
  • VLSI architecture
  • Variable block size motion estimation

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Electrical and Electronic Engineering
