64 kbit/s video coding equipment using adaptive tree‐search vector quantization

Hiroshi Watanabe*, Ryohei Hoshino, Hideo Kuroda, Hideo Hashimoto

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


This paper proposes a high-efficiency coding algorithm and its realization as 64 kbit/s video coding equipment. A prototype was constructed and shows high-quality performance for video conferencing and videophones. For high-efficiency coding in video conferencing, a coding algorithm combining motion-compensated interframe prediction with orthogonal transform coding or vector quantization is usually employed. It is known theoretically that the latter has excellent characteristics, approaching the rate-distortion bound. However, a coding system is desired that offers not only high coding efficiency but also small-scale hardware and easy control of the amount of generated information. This paper proposes an adaptive tree-search vector quantization in which the size of the codebook is varied according to the local properties of the image. The resulting improvement in coding efficiency is demonstrated by computer simulation in comparison with a conventional system using a fixed-size codebook. A 64 kbit/s video coding system with a resolution of 384 (horizontal) × 240 (vertical) pixels per frame was constructed. Using this system with the proposed algorithm, a temporal resolution of 5 to 7.5 frames/s was obtained, confirming satisfactory results for practical applications such as video conferencing and videophones.
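The abstract's central idea, a tree-structured vector quantizer whose effective codebook size varies with local image activity, can be illustrated with a minimal sketch. The paper does not give implementation details, so everything below is an assumption for illustration: a binary codebook tree built by LBG-style centroid splitting, a squared-error distortion measure, and a toy variance threshold that assigns "active" blocks a deeper (larger) effective codebook.

```python
# Hedged sketch of adaptive tree-search vector quantization.
# Assumptions (not from the paper): binary tree built by perturbing
# and splitting centroids; per-block search depth chosen from the
# block's variance, so the codebook size adapts to local activity.

def mean(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    n, k = len(vecs), len(vecs[0])
    return [sum(v[i] for v in vecs) / n for i in range(k)]

def dist2(a, b):
    """Squared Euclidean distortion between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_tree(vecs, depth):
    """Grow a binary codebook tree; each node stores a centroid."""
    node = {"centroid": mean(vecs), "children": None}
    if depth == 0 or len(vecs) < 2:
        return node
    c = node["centroid"]
    # LBG-style split: perturb the centroid into two seeds, then
    # partition the training vectors by nearest seed.
    lo = [x - 1e-3 for x in c]
    hi = [x + 1e-3 for x in c]
    left = [v for v in vecs if dist2(v, lo) <= dist2(v, hi)]
    right = [v for v in vecs if dist2(v, lo) > dist2(v, hi)]
    if not left or not right:
        return node
    node["children"] = (build_tree(left, depth - 1),
                        build_tree(right, depth - 1))
    return node

def encode(tree, v, max_depth):
    """Tree search: descend toward the nearer child, emitting one bit
    per level, stopping at max_depth (the adaptive codebook size)."""
    node, path, d = tree, [], 0
    while node["children"] and d < max_depth:
        l, r = node["children"]
        bit = 0 if dist2(v, l["centroid"]) <= dist2(v, r["centroid"]) else 1
        path.append(bit)
        node = (l, r)[bit]
        d += 1
    return path, node["centroid"]

def adaptive_depth(v, full_depth):
    """Toy activity measure: high-variance blocks search deeper."""
    m = sum(v) / len(v)
    var = sum((x - m) ** 2 for x in v) / len(v)
    return full_depth if var > 1.0 else full_depth // 2
```

A flat block here costs only one bit per tree level, so a shallow search on smooth blocks both cuts the bit rate and keeps the search small, which matches the paper's stated goals of compact hardware and easy control of information generation.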

Original language: English
Pages (from-to): 35-44
Number of pages: 10
Journal: Electronics and Communications in Japan (Part I: Communications)
Issue number: 9
Publication status: Published - 1989 Sept
Externally published: Yes

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Electrical and Electronic Engineering


