Abstract: This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.

Citation: Chang, Huiwen, Jingwan Lu, Fisher Yu, and Adam Finkelstein. "PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup." In IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 40-48. doi:10.1109/CVPR.2018.00012

Pages: 40-48

Type of Material: Conference Article

Journal/Proceeding Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition
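The abstract's central idea - applying a makeup style with a forward function and then removing it with a backward function should reconstruct the original photo - can be sketched as a cycle-consistency loss. The sketch below is purely illustrative: `G`, `F`, and `alpha` are hypothetical placeholders (simple array blends standing in for the paper's convolutional networks), and unlike the paper's removal network, the placeholder `F` here is given the reference so it can invert `G` exactly.

```python
import numpy as np

# Hypothetical stand-in for the forward (style-transfer) network:
# blends the reference's "style" into the source photo.
def G(source, reference, alpha=0.5):
    return (1 - alpha) * source + alpha * reference

# Hypothetical stand-in for the backward (makeup-removal) network.
# NOTE: the paper's removal network does NOT see the reference; this
# placeholder does, solely so the toy cycle inverts exactly.
def F(styled, reference, alpha=0.5):
    return (styled - alpha * reference) / (1 - alpha)

def cycle_consistency_loss(source, reference):
    # ||F(G(x, y)) - x||: applying then removing the style
    # should return the original input photo.
    reconstructed = F(G(source, reference), reference)
    return float(np.abs(reconstructed - source).mean())

rng = np.random.default_rng(0)
x = rng.random((8, 8, 3))  # toy "no-makeup" source photo
y = rng.random((8, 8, 3))  # toy makeup reference photo
loss = cycle_consistency_loss(x, y)
```

In training, this reconstruction term is minimized jointly with adversarial losses on both networks; here the exact inverse makes the loss vanish up to floating-point rounding.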
Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.