PhotoApp: Photorealistic appearance editing of head portraits

Mallikarjun B R¹, Ayush Tewari¹, Abdallah Dib², Tim Weyrich³, Bernd Bickel⁴, Hans-Peter Seidel¹, Hanspeter Pfister⁵, Wojciech Matusik⁶, Louis Chevallier², Mohamed Elgharib¹, Christian Theobalt¹
¹Max Planck Institute for Informatics   ²InterDigital   ³University College London   ⁴IST Austria   ⁵Harvard University   ⁶MIT CSAIL
ACM Transactions on Graphics 40(4) (SIGGRAPH 2021)

Abstract

Photorealistic editing of head portraits is a challenging task as humans are very sensitive to inconsistencies in faces. We present an approach for high-quality, intuitive editing of the camera viewpoint and scene illumination (parameterised with an environment map) in a portrait image. This requires our method to capture and control the full reflectance field of the person in the image. Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages. Such datasets are expensive to acquire, not readily available, and do not capture all the rich variations of in-the-wild portrait images. In addition, most supervised approaches only focus on relighting and do not allow camera viewpoint editing. Thus, they only capture and control a subset of the reflectance field. Recently, portrait editing has been demonstrated by operating in the generative model space of StyleGAN. While such approaches do not require direct supervision, there is a significant loss of quality compared to the supervised approaches. In this paper, we present a method which learns from limited supervised training data. The training images only include people in a fixed neutral expression with eyes closed, with little variation in hair or background. Each person is captured under 150 one-light-at-a-time conditions and from 8 camera poses. Instead of training directly in the image space, we design a supervised problem which learns transformations in the latent space of StyleGAN. This combines the best of supervised learning and generative adversarial modeling. We show that the StyleGAN prior allows for generalisation to different expressions, hairstyles and backgrounds. This produces high-quality photorealistic results for in-the-wild images and significantly outperforms existing methods. Our approach can edit the illumination and pose simultaneously, and runs at interactive rates.
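
To make the latent-space formulation concrete, the sketch below shows one way such an editing network could look in PyTorch. This is a hypothetical illustration, not the paper's network or any released PhotoApp code: the class name LatentAppearanceEditor, the per-layer residual MLPs, and the environment-map and pose dimensions are all assumptions. It only conveys the general idea of predicting edits to a StyleGAN W+ latent code conditioned on a target environment map and camera pose, with a frozen pretrained StyleGAN generator used solely for decoding.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of latent-space appearance editing (not the paper's exact model).
    # A pretrained, frozen StyleGAN generator is assumed to exist separately for decoding.

    class LatentAppearanceEditor(nn.Module):
        """Maps a source StyleGAN W+ latent plus target illumination and camera pose
        to an edited W+ latent (illustrative assumption, not the published architecture)."""

        def __init__(self, latent_dim=512, num_layers=18, env_dim=16 * 32 * 3, pose_dim=6):
            super().__init__()
            cond_dim = env_dim + pose_dim
            # One small MLP per StyleGAN layer, predicting a residual offset in W+ space.
            self.offset_mlps = nn.ModuleList([
                nn.Sequential(
                    nn.Linear(latent_dim + cond_dim, 512),
                    nn.ReLU(inplace=True),
                    nn.Linear(512, latent_dim),
                )
                for _ in range(num_layers)
            ])

        def forward(self, w_plus, env_map, pose):
            # w_plus:  (B, num_layers, latent_dim) latent of the input portrait
            # env_map: (B, env_dim) flattened target environment map
            # pose:    (B, pose_dim) target camera pose parameters
            cond = torch.cat([env_map, pose], dim=-1)
            edited = []
            for i, mlp in enumerate(self.offset_mlps):
                inp = torch.cat([w_plus[:, i], cond], dim=-1)
                edited.append(w_plus[:, i] + mlp(inp))  # residual edit per layer
            return torch.stack(edited, dim=1)

    # Usage sketch: embed the portrait into W+ with an off-the-shelf encoder,
    # edit the latent, then decode with the frozen StyleGAN generator:
    #   editor = LatentAppearanceEditor()
    #   w_edit = editor(w_plus, target_env_map, target_pose)
    #   image  = stylegan_generator.synthesis(w_edit)  # pretrained, frozen

Training such an editor on light-stage data while keeping the generator frozen is what would let the StyleGAN prior fill in expressions, hairstyles and backgrounds absent from the supervised set, as described in the abstract.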

Resources

Citation

@article{mallikarjun2021photoapp,
  title={PhotoApp: Photorealistic appearance editing of head portraits},
  author={Mallikarjun, B R and Tewari, Ayush and Dib, Abdallah and Weyrich, Tim and Bickel, Bernd and Seidel, Hans-Peter and Pfister, Hanspeter and Matusik, Wojciech and Chevallier, Louis and Elgharib, Mohamed and Theobalt, Christian},
  journal={ACM Transactions on Graphics},
  volume={40},
  number={4},
  year={2021}
}

Acknowledgements

This work was supported by the ERC Consolidator Grant 4DReply (770784). We also acknowledge support from Technicolor and InterDigital.