Perceptual Quality Driven Adaptive Video Coding Using JND Estimation

Masaru Takeuchi*, Shintaro Saika, Yusuke Sakamoto, Tatsuya Nagashima, Zhengxue Cheng, Kenji Kanai, Jiro Katto, Kaijin Wei, Ju Zengwei, Xu Wei

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

12 Citations (Scopus)

Abstract

We introduce a perceptual video quality driven video encoding solution for optimized adaptive streaming. By encoding multiple bitrate/resolution representations, as in MPEG-DASH, video streaming services can deliver the best video stream to each client according to the client's available bandwidth and viewing device capability. However, conventional fixed encoding recipes (i.e., resolution-bitrate pairs) suffer from problems such as improper resolution selection and stream redundancy. To avoid these problems, we propose a novel video coding method that generates multiple representations at constant Just Noticeable Difference (JND) intervals. For this purpose, we developed a JND scale estimator using Support Vector Regression (SVR) and designed a pre-encoder that adaptively outputs an encoding recipe with constant JND intervals for the input video.
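The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: an SVR model (here scikit-learn's SVR) predicts a JND scale value from per-title features and a candidate bitrate, and a simple greedy pass keeps only bitrates that are roughly one JND apart. The feature choice (spatial/temporal information plus bitrate), the training values, and the helper names (jnd_scale, build_ladder) are illustrative assumptions; the paper's actual features and training procedure are described in the full text.

```python
# Hypothetical sketch (not the paper's code): predict a JND scale with SVR,
# then space a bitrate ladder so adjacent representations differ by ~1 JND.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed training data: per-clip features [SI, TI, bitrate_kbps] and
# subjective JND-scale labels (JND units below visually lossless quality).
X_train = np.array([[30.2, 12.5, 1000],
                    [30.2, 12.5, 3000],
                    [55.8, 40.1, 1000],
                    [55.8, 40.1, 3000]], dtype=float)
y_train = np.array([3.0, 1.0, 4.5, 2.2])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)

def jnd_scale(si, ti, bitrate_kbps):
    """Predicted JND distance from visually lossless quality (illustrative)."""
    return float(model.predict([[si, ti, bitrate_kbps]])[0])

def build_ladder(si, ti, candidates_kbps, step_jnd=1.0):
    """Greedily keep candidate bitrates that are about one JND apart."""
    ladder, last = [], None
    for br in sorted(candidates_kbps):
        q = jnd_scale(si, ti, br)
        if last is None or abs(last - q) >= step_jnd:
            ladder.append((br, q))
            last = q
    return ladder

print(build_ladder(42.0, 25.0, [500, 1000, 1500, 2500, 4000, 6000]))
```

In this sketch the pre-encoder role is reduced to the greedy selection step; in practice the paper's pre-encoder derives the resolution-bitrate recipe adaptively per input video before the final encode.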

Original language: English
Title of host publication: 2018 Picture Coding Symposium, PCS 2018 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 179-183
Number of pages: 5
ISBN (Print): 9781538641606
DOIs
Publication status: Published - 2018 Sept 5
Event: 33rd Picture Coding Symposium, PCS 2018 - San Francisco, United States
Duration: 2018 Jun 24 - 2018 Jun 27

Publication series

Name: 2018 Picture Coding Symposium, PCS 2018 - Proceedings

Other

Other: 33rd Picture Coding Symposium, PCS 2018
Country/Territory: United States
City: San Francisco
Period: 18/6/24 - 18/6/27

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
