Semantic Segmentation in Learned Compressed Domain

Jinming Liu*, Heming Sun*, Jiro Katto

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)


Most machine vision tasks (e.g., semantic segmentation) operate on images that have been encoded and decoded by an image compression algorithm (e.g., JPEG). However, these decoded pixel-domain images contain distortion and are optimized for human perception, making the performance of machine vision tasks suboptimal. In this paper, we propose a compressed-domain method to improve segmentation. i) A dynamic and a static channel selection method are proposed to reduce the redundancy of the compressed representations obtained by encoding. ii) Two different transform modules are explored and analyzed to transform the compressed representation into features for the segmentation network. The experimental results show that our method saves up to 15.8% of the bitrate compared with a state-of-the-art compressed-domain work, while saving up to about 83.6% of the bitrate and 44.8% of the inference time compared with a pixel-domain method.
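The two selection strategies in (i) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the latent shape, the per-channel energy criterion, and the threshold are all assumptions made for the example.

```python
import numpy as np

# Hypothetical latent produced by a learned image codec's encoder,
# shaped (channels, height, width). Shape and values are illustrative.
rng = np.random.default_rng(0)
latent = rng.standard_normal((192, 16, 16))

def static_channel_selection(latent, keep=64):
    """Keep the `keep` channels with the largest mean absolute activation.

    A *static* policy: the kept channel indices would be fixed offline
    (e.g., measured over a calibration set) and reused for every image.
    """
    energy = np.abs(latent).mean(axis=(1, 2))   # per-channel energy
    kept = np.sort(np.argsort(energy)[-keep:])  # indices of top-`keep` channels
    return latent[kept], kept

def dynamic_channel_selection(latent, threshold=0.5):
    """Keep channels whose energy exceeds a per-image threshold.

    A *dynamic* policy: the kept set depends on the input, so the chosen
    indices must be signalled to the decoder alongside the channels.
    """
    energy = np.abs(latent).mean(axis=(1, 2))
    kept = np.flatnonzero(energy > threshold)
    return latent[kept], kept

selected, idx = static_channel_selection(latent, keep=64)
print(selected.shape)  # (64, 16, 16)
```

Either way, the segmentation network then consumes only the selected channels, which is where the bitrate saving over decoding full pixel-domain images comes from.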

Original language: English
Title of host publication: 2022 Picture Coding Symposium, PCS 2022 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 5
ISBN (Electronic): 9781665492577
Publication status: Published - 2022
Event: 2022 Picture Coding Symposium, PCS 2022 - San Jose, United States
Duration: 2022 Dec 7 - 2022 Dec 9

Publication series

Name: 2022 Picture Coding Symposium, PCS 2022 - Proceedings


Conference: 2022 Picture Coding Symposium, PCS 2022
Country/Territory: United States
City: San Jose


Keywords

  • Channel selection
  • Compressed domain
  • Deep learning
  • Image compression

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Signal Processing

