TY - GEN
T1 - Cosmetic features extraction by a single image makeup decomposition
AU - Yamagishi, Kanami
AU - Yamamoto, Shintaro
AU - Kato, Takuya
AU - Morishima, Shigeo
N1 - Funding Information:
This work was supported by JST ACCEL Grant Number JPMJAC1602, Japan.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/13
Y1 - 2018/12/13
N2 - In recent years, a large number of makeup images have been shared on social media. Most of these images lack information about the cosmetics used, such as color or glitter, and such properties are difficult to infer due to the diversity of skin colors and lighting conditions. In this paper, our goal is to estimate cosmetic features from only a single makeup image. Previous work has measured the material parameters of cosmetic products from pairs of images showing the face with and without makeup, but such comparison images are not always available. Furthermore, that method cannot represent local effects such as pearl or glitter because it relies on physically-based reflectance models. We propose a novel image-based method to extract cosmetic features considering both color and local effects by decomposing the target image into makeup and skin color using Difference of Gaussians (DoG). Our method can be applied to a single, standalone makeup image and considers both local effects and color. In addition, our method is robust to differences in skin color because the decomposition separates makeup from skin. The experimental results demonstrate that our method is more robust to skin color differences and captures the characteristics of each cosmetic product.
AB - In recent years, a large number of makeup images have been shared on social media. Most of these images lack information about the cosmetics used, such as color or glitter, and such properties are difficult to infer due to the diversity of skin colors and lighting conditions. In this paper, our goal is to estimate cosmetic features from only a single makeup image. Previous work has measured the material parameters of cosmetic products from pairs of images showing the face with and without makeup, but such comparison images are not always available. Furthermore, that method cannot represent local effects such as pearl or glitter because it relies on physically-based reflectance models. We propose a novel image-based method to extract cosmetic features considering both color and local effects by decomposing the target image into makeup and skin color using Difference of Gaussians (DoG). Our method can be applied to a single, standalone makeup image and considers both local effects and color. In addition, our method is robust to differences in skin color because the decomposition separates makeup from skin. The experimental results demonstrate that our method is more robust to skin color differences and captures the characteristics of each cosmetic product.
UR - http://www.scopus.com/inward/record.url?scp=85060842073&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85060842073&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2018.00248
DO - 10.1109/CVPRW.2018.00248
M3 - Conference contribution
AN - SCOPUS:85060842073
T3 - IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
SP - 1965
EP - 1967
BT - Proceedings - 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
PB - IEEE Computer Society
T2 - 31st Meeting of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2018
Y2 - 18 June 2018 through 22 June 2018
ER -