TY - GEN
T1 - FSNet: An Identity-Aware Generative Model for Image-Based Face Swapping
T2 - 14th Asian Conference on Computer Vision, ACCV 2018
AU - Natsume, Ryota
AU - Yatagawa, Tatsuya
AU - Morishima, Shigeo
N1 - Funding Information:
Acknowledgments. This study was supported in part by the Strategic Basic Research Program ACCEL of the Japan Science and Technology Agency (JPMJAC1602). Tatsuya Yatagawa was supported by the Research Fellowship for Young Researchers of the Japan Society for the Promotion of Science (16J02280). Shigeo Morishima was supported by a Grant-in-Aid from the Waseda Institute of Advanced Science and Engineering. The authors would also like to acknowledge NVIDIA Corporation for providing GPUs through the Academic GPU Grant Program.
Publisher Copyright:
© 2019, Springer Nature Switzerland AG.
PY - 2019
Y1 - 2019
N2 - This paper presents FSNet, a deep generative model for image-based face swapping. Traditional face-swapping methods are based on three-dimensional morphable models (3DMMs): facial textures are exchanged between the three-dimensional (3D) geometries estimated from two images of different individuals. However, estimating 3D geometries together with varying lighting conditions using 3DMMs remains difficult. Instead of facial textures, we represent the face region with a latent variable computed by the proposed deep neural network (DNN). The proposed DNN synthesizes a face-swapped image from the latent variable of the face region in one image and the non-face region of another image. The proposed method requires no 3DMM fitting; it performs face swapping simply by feeding two face images to the network. Consequently, our DNN-based face swapping handles challenging inputs with different face orientations and lighting conditions better than previous approaches. Through several experiments, we demonstrate that the proposed method performs face swapping more stably than the state-of-the-art method, and that its results are comparable in quality to those of that method.
AB - This paper presents FSNet, a deep generative model for image-based face swapping. Traditional face-swapping methods are based on three-dimensional morphable models (3DMMs): facial textures are exchanged between the three-dimensional (3D) geometries estimated from two images of different individuals. However, estimating 3D geometries together with varying lighting conditions using 3DMMs remains difficult. Instead of facial textures, we represent the face region with a latent variable computed by the proposed deep neural network (DNN). The proposed DNN synthesizes a face-swapped image from the latent variable of the face region in one image and the non-face region of another image. The proposed method requires no 3DMM fitting; it performs face swapping simply by feeding two face images to the network. Consequently, our DNN-based face swapping handles challenging inputs with different face orientations and lighting conditions better than previous approaches. Through several experiments, we demonstrate that the proposed method performs face swapping more stably than the state-of-the-art method, and that its results are comparable in quality to those of that method.
KW - Convolutional neural networks
KW - Deep generative models
KW - Face swapping
UR - http://www.scopus.com/inward/record.url?scp=85066959271&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85066959271&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-20876-9_8
DO - 10.1007/978-3-030-20876-9_8
M3 - Conference contribution
AN - SCOPUS:85066959271
SN - 9783030208752
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 117
EP - 132
BT - Computer Vision – ACCV 2018 – 14th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Mori, Greg
A2 - Li, Hongdong
A2 - Schindler, Konrad
A2 - Jawahar, C.V.
PB - Springer Verlag
Y2 - 2 December 2018 through 6 December 2018
ER -