TY - GEN
T1 - Improving Latent Quantization of Learned Image Compression with Gradient Scaling
AU - Sun, Heming
AU - Yu, Lu
AU - Katto, Jiro
N1 - Funding Information:
This work was supported in part by JST, PRESTO Grant Number JPMJPR19M5, Japan; in part by JSPS, Grant Number 21K17770; in part by Kenjiro Takayanagi Foundation; in part by NICT, Japan Grant Number 03801. Corresponding author: Heming Sun, hemingsun@aoni.waseda.jp
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Learned image compression (LIC) has shown superior compression ability. Quantization is an inevitable stage that generates the quantized latent for entropy coding. To solve the non-differentiability of quantization in the training phase, many differentiable approximate quantization methods have been proposed. However, the derivative of the quantized latent with respect to the non-quantized latent is set to one in most previous methods. As a result, the quantization error between the non-quantized and quantized latents is not taken into account during gradient descent. To address this issue, we exploit a gradient scaling method to scale the gradient of the non-quantized latent in back-propagation. Experimental results show that our method outperforms recent LIC quantization methods.
AB - Learned image compression (LIC) has shown superior compression ability. Quantization is an inevitable stage that generates the quantized latent for entropy coding. To solve the non-differentiability of quantization in the training phase, many differentiable approximate quantization methods have been proposed. However, the derivative of the quantized latent with respect to the non-quantized latent is set to one in most previous methods. As a result, the quantization error between the non-quantized and quantized latents is not taken into account during gradient descent. To address this issue, we exploit a gradient scaling method to scale the gradient of the non-quantized latent in back-propagation. Experimental results show that our method outperforms recent LIC quantization methods.
KW - Gradient scaling
KW - Learned image compression
KW - Quantization
UR - http://www.scopus.com/inward/record.url?scp=85147250275&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147250275&partnerID=8YFLogxK
U2 - 10.1109/VCIP56404.2022.10008823
DO - 10.1109/VCIP56404.2022.10008823
M3 - Conference contribution
AN - SCOPUS:85147250275
T3 - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
BT - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Visual Communications and Image Processing, VCIP 2022
Y2 - 13 December 2022 through 16 December 2022
ER -