Enhanced Intra Prediction for Video Coding by Using Multiple Neural Networks

Heming Sun, Zhengxue Cheng, Masaru Takeuchi, Jiro Katto

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

This paper enhances intra prediction by using multiple neural network modes (NMs). Each NM serves as an end-to-end mapping from the neighboring reference blocks to the current coding block. For the provided NMs, we present two schemes (appending and substitution) to integrate the NMs with the traditional modes (TMs) defined in High Efficiency Video Coding (HEVC). In the appending scheme, each NM corresponds to a certain range of TMs, where the TMs are categorized according to their expected prediction errors. After determining the relevant TMs for each NM, we present a probability-aware mode signaling scheme in which the NMs more likely to be the best mode are signaled with fewer bits. In the substitution scheme, we propose to replace the most and the least probable TMs. A new most probable mode (MPM) generation method is also employed when substituting the least probable TMs. Experimental results demonstrate that using multiple NMs noticeably improves the coding efficiency compared with a single NM. Specifically, the proposed appending scheme with seven NMs saves 2.6%, 3.8%, and 3.1% BD-rate for the Y, U, and V components, respectively, compared with the single-NM state-of-the-art works.
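To illustrate the probability-aware mode signaling idea described above, here is a minimal sketch, not the authors' implementation: modes that are more likely to be selected as the best mode receive shorter codewords. The seven NM probabilities and the use of a Huffman construction are illustrative assumptions; the paper's actual signaling scheme and statistics are defined in the full text.

```python
# Minimal sketch of probability-aware mode signaling (illustrative only):
# higher-probability modes are assigned fewer bits via a Huffman code.
import heapq

def build_huffman_code(probabilities):
    """Assign binary codewords so that more probable modes get shorter codes."""
    # Heap entries: (probability, tie_breaker, {mode: codeword_so_far}).
    heap = [(p, i, {mode: ""}) for i, (mode, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p_lo, _, codes_lo = heapq.heappop(heap)  # two least probable subtrees
        p_hi, _, codes_hi = heapq.heappop(heap)
        merged = {m: "0" + c for m, c in codes_lo.items()}
        merged.update({m: "1" + c for m, c in codes_hi.items()})
        heapq.heappush(heap, (p_lo + p_hi, counter, merged))
        counter += 1
    return heap[0][2]

# Hypothetical selection frequencies for seven NMs (NM0 assumed most probable).
nm_probs = {"NM0": 0.30, "NM1": 0.20, "NM2": 0.15, "NM3": 0.12,
            "NM4": 0.10, "NM5": 0.08, "NM6": 0.05}
codes = build_huffman_code(nm_probs)
for mode in sorted(codes, key=lambda m: len(codes[m])):
    print(mode, codes[mode])  # more probable modes print with shorter codewords
```

Running the sketch shows NM0 receiving the shortest codeword and NM5/NM6 the longest, which is the bit-saving behavior the abstract attributes to signaling high-probability NMs with fewer bits.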

Original language: English
Article number: 8947942
Pages (from-to): 2764-2779
Number of pages: 16
Journal: IEEE Transactions on Multimedia
Volume: 22
Issue number: 11
Publication status: Published - Nov 2020

Keywords

  • High efficiency video coding (HEVC)
  • intra prediction
  • neural network
  • probability

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
