TY - JOUR
T1 - DeepRemaster: Temporal source-reference attention networks for comprehensive video enhancement
T2 - ACM Transactions on Graphics
AU - Iizuka, Satoshi
AU - Simo-Serra, Edgar
N1 - Funding Information:
This work was partially supported by JST ACT-I (Iizuka, Grant Number: JPMJPR16U3), JST PRESTO (Simo-Serra, Grant Number: JPMJPR1756), and JST CREST (Iizuka and Simo-Serra, Grant Number: JPMJCR14D1).
Publisher Copyright:
© 2019 Copyright held by the owner/author(s).
PY - 2019/11
Y1 - 2019/11
N2 - The remastering of vintage film comprises a variety of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively. Our work is based on temporal convolutional neural networks with attention mechanisms trained on videos with data-driven deterioration simulation. Our proposed source-reference attention allows the model to handle an arbitrary number of reference color images to colorize long videos without the need for segmentation while maintaining temporal consistency. Quantitative analysis shows that our framework outperforms existing approaches and that, in contrast to them, its performance increases with longer videos and more reference color images.
AB - The remastering of vintage film comprises a variety of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively. Our work is based on temporal convolutional neural networks with attention mechanisms trained on videos with data-driven deterioration simulation. Our proposed source-reference attention allows the model to handle an arbitrary number of reference color images to colorize long videos without the need for segmentation while maintaining temporal consistency. Quantitative analysis shows that our framework outperforms existing approaches and that, in contrast to them, its performance increases with longer videos and more reference color images.
KW - Colorization
KW - Convolutional network
KW - Remastering
KW - Restoration
KW - Source-reference attention
UR - http://www.scopus.com/inward/record.url?scp=85078901547&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85078901547&partnerID=8YFLogxK
U2 - 10.1145/3355089.3356570
DO - 10.1145/3355089.3356570
M3 - Article
AN - SCOPUS:85078901547
SN - 0730-0301
VL - 38
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 6
M1 - 176
ER -