TY - JOUR
T1 - Globally and locally consistent image completion
AU - Iizuka, Satoshi
AU - Simo-Serra, Edgar
AU - Ishikawa, Hiroshi
N1 - Funding Information:
This work was partially supported by JST ACT-I Grant Number JPMJPR16U3 and JST CREST Grant Number JPMJCR14D1.
PY - 2017
Y1 - 2017
N2 - We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess whether it is coherent as a whole, while the local discriminator looks only at a small area centered on the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones in overall consistency as well as in detail. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete images of objects with familiar and highly specific structures, such as faces.
AB - We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess whether it is coherent as a whole, while the local discriminator looks only at a small area centered on the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones in overall consistency as well as in detail. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete images of objects with familiar and highly specific structures, such as faces.
KW - Convolutional neural network
KW - Image completion
UR - http://www.scopus.com/inward/record.url?scp=85030751884&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85030751884&partnerID=8YFLogxK
U2 - 10.1145/3072959.3073659
DO - 10.1145/3072959.3073659
M3 - Conference article
AN - SCOPUS:85030751884
SN - 0730-0301
VL - 36
JO - ACM Transactions on Graphics
JF - ACM Transactions on Graphics
IS - 4
M1 - 107
T2 - ACM SIGGRAPH 2017
Y2 - 30 July 2017 through 3 August 2017
ER -