TY - GEN
T1 - RSGAN: Face swapping and editing using face and hair representation in latent spaces
T2 - ACM SIGGRAPH 2018 Posters - International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2018
AU - Natsume, Ryota
AU - Yatagawa, Tatsuya
AU - Morishima, Shigeo
N1 - Funding Information:
This study was supported in part by the Strategic Basic Research Program ACCEL of the Japan Science and Technology Agency (JPMJAC1602). Tatsuya Yatagawa was supported by a Research Fellowship for Young Researchers of the Japan Society for the Promotion of Science (16J02280). Shigeo Morishima was supported by a Grant-in-Aid from the Waseda Institute of Advanced Science and Engineering. The authors would like to acknowledge NVIDIA Corporation for providing GPUs through the Academic GPU Grant Program.
Publisher Copyright:
© 2018 Copyright held by the owner/author(s).
PY - 2018/8/12
Y1 - 2018/8/12
N2 - This abstract introduces a generative neural network for face swapping and editing face images. We refer to this network as the "region-separative generative adversarial network (RSGAN)". In existing deep generative models such as the variational autoencoder (VAE) and the generative adversarial network (GAN), the training data must represent what the generative models synthesize; for example, image inpainting is achieved by training on images with and without holes. However, it is difficult or even impossible to prepare a dataset that includes face images both before and after face swapping, because the faces of real people cannot be swapped without surgical operations. We tackle this problem by training the network so that it synthesizes a natural face image from an arbitrary pair of face and hair appearances. In addition to face swapping, the proposed network can be applied to other editing applications, such as visual attribute editing and random face-parts synthesis.
AB - This abstract introduces a generative neural network for face swapping and editing face images. We refer to this network as the "region-separative generative adversarial network (RSGAN)". In existing deep generative models such as the variational autoencoder (VAE) and the generative adversarial network (GAN), the training data must represent what the generative models synthesize; for example, image inpainting is achieved by training on images with and without holes. However, it is difficult or even impossible to prepare a dataset that includes face images both before and after face swapping, because the faces of real people cannot be swapped without surgical operations. We tackle this problem by training the network so that it synthesizes a natural face image from an arbitrary pair of face and hair appearances. In addition to face swapping, the proposed network can be applied to other editing applications, such as visual attribute editing and random face-parts synthesis.
KW - Face
KW - Face swapping
KW - Image editing
KW - Portrait
UR - http://www.scopus.com/inward/record.url?scp=85054809781&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85054809781&partnerID=8YFLogxK
U2 - 10.1145/3230744.3230818
DO - 10.1145/3230744.3230818
M3 - Conference contribution
AN - SCOPUS:85054809781
SN - 9781450358170
T3 - ACM SIGGRAPH 2018 Posters, SIGGRAPH 2018
BT - ACM SIGGRAPH 2018 Posters, SIGGRAPH 2018
PB - Association for Computing Machinery, Inc
Y2 - 12 August 2018 through 16 August 2018
ER -
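
The abstract above describes the core idea: encode the face and hair regions of a portrait into separate latent representations, then decode an arbitrary (face, hair) pair into one natural composite image, which makes face swapping a matter of recombining latents. The following is a minimal, illustrative PyTorch sketch of that region-separative recombination idea only; it is not the authors' code, and the network sizes, module names, and training details are assumptions rather than the published RSGAN architecture (which additionally uses adversarial and variational training).

```python
# Minimal sketch (not the authors' implementation): separate encoders map the
# face region and the hair region to latent vectors; a composer decodes any
# pair of latents into a full portrait. All dimensions are illustrative.
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    """Encodes one masked region (face or hair) into a latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Composer(nn.Module):
    """Decodes a concatenated (face, hair) latent pair into a portrait."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(2 * latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z_face, z_hair):
        z = torch.cat([z_face, z_hair], dim=1)
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# Face swapping as latent recombination: A's face latent with B's hair latent.
enc_face, enc_hair, composer = RegionEncoder(), RegionEncoder(), Composer()
img_a = torch.randn(1, 3, 64, 64)  # stand-ins for aligned 64x64 portraits
img_b = torch.randn(1, 3, 64, 64)
swapped = composer(enc_face(img_a), enc_hair(img_b))  # A's face, B's hair
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

In this sketch the pairing of encoders with a shared composer is what lets training proceed without before/after swap pairs: the model only ever needs to reconstruct real images from their own region latents, yet at test time any two latents can be recombined.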