Low-dimensional superpixel descriptor and its application in visual correspondence estimation

Songlin Du*, Takeshi Ikenaga

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


Establishing local visual correspondence between video frames is an important and challenging problem in many vision-based applications. Pixel-level matching based on local keypoint detection and description is a typical approach to visual correspondence estimation. Unlike traditional local keypoint descriptor based methods, this paper proposes a comprehensive yet low-dimensional local feature descriptor based on superpixels generated by over-segmentation. The proposed descriptor extracts shape, texture, and color features from superpixels by oriented center-boundary distance (OCBD), gray-level co-occurrence matrix (GLCM), and saturation histogram (SHIST), respectively. These feature types are more comprehensive than those of existing descriptors, which extract only one specific kind of feature. Experimental results on the widely used Middlebury optical flow dataset show that the proposed superpixel descriptor achieves triple the accuracy of the state-of-the-art ORB descriptor, which has the same feature dimension as the proposed one. In addition, since the dimension of the proposed superpixel descriptor is low, it is convenient for matching and memory-efficient for hardware implementation.
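The abstract names three per-superpixel feature types: an oriented center-boundary distance (OCBD) for shape, GLCM statistics for texture, and a saturation histogram (SHIST) for color. The sketch below illustrates that general scheme with NumPy only; the function names, the number of orientation bins, and the choice of GLCM summary statistics (contrast and energy) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ocbd_shape(mask, n_orient=8):
    """Shape feature: max centroid-to-boundary distance per angular sector
    (an OCBD-style sketch; sector count is an assumption)."""
    mask = mask.astype(bool)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # Boundary pixels: superpixel pixels with at least one background 4-neighbor.
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    by, bx = np.nonzero(mask & ~interior)
    angles = np.arctan2(by - cy, bx - cx)
    dists = np.hypot(by - cy, bx - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * n_orient).astype(int) % n_orient
    feat = np.zeros(n_orient)
    for b in range(n_orient):
        sel = dists[bins == b]
        feat[b] = sel.max() if sel.size else 0.0
    return feat

def glcm_texture(gray, mask, levels=8):
    """Texture feature: GLCM over horizontal neighbor pairs inside the
    superpixel, summarized by Haralick-style contrast and energy."""
    mask = mask.astype(bool)
    q = (gray.astype(np.int64) * levels // 256).clip(0, levels - 1)
    pairs = mask[:, :-1] & mask[:, 1:]          # both pixels inside superpixel
    a, b = q[:, :-1][pairs], q[:, 1:][pairs]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)
    glcm /= max(glcm.sum(), 1.0)                # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

def shist_color(saturation, mask, bins=8):
    """Color feature: normalized histogram of HSV saturation values."""
    h, _ = np.histogram(saturation[mask.astype(bool)], bins=bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)

def superpixel_descriptor(gray, saturation, mask):
    """Concatenate shape, texture, and color into one low-dimensional vector."""
    return np.concatenate([ocbd_shape(mask),
                           glcm_texture(gray, mask),
                           shist_color(saturation, mask)])
```

With 8 orientation bins, 2 GLCM statistics, and an 8-bin saturation histogram, the descriptor is 18-dimensional, consistent with the paper's emphasis on keeping the dimension low enough for cheap matching and hardware-friendly storage.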

Original language: English
Pages (from-to): 19457-19472
Number of pages: 16
Journal: Multimedia Tools and Applications
Issue number: 14
Publication status: Published - 2019 Jul 30


Keywords

  • Low-dimensional feature
  • Superpixel descriptor
  • Visual correspondence estimation

ASJC Scopus subject areas

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications


