Attention Based Facial Expression Manipulation

Abstract

Facial expression manipulation has two objectives: 1) generating an image with the target expression; 2) preserving the identity information of the original image as much as possible. Recently, Generative Adversarial Networks (GANs) have shown the ability to perform fine-grained facial expression manipulation. However, current methods are still prone to generating images of poor quality. In this work, we propose a U-Net based generator with multi-attention gates for facial expression manipulation. The multi-level attention mechanism helps to manipulate only the relevant regions while preserving identity features, thus improving editing ability. Furthermore, we replace the direct skip-connections with self-attention blocks to capture long-range dependencies in images. To suppress artifacts in the generated images, we add a discriminator-based loss function to the training process. Extensive quantitative and qualitative experiments show that our proposed method achieves better performance for facial expression manipulation.
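The abstract describes two components: attention gates on the skip paths of a U-Net generator and self-attention blocks in place of direct skip-connections. The sketch below is only an illustration of those generic building blocks (an additive attention gate in the style of Attention U-Net and a SAGAN-style self-attention layer); the class names, channel sizes, and layer layout are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Additive attention gate applied to a skip-connection feature map
    (hypothetical layer sizes; illustrative of a multi-level attention gate)."""

    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # scalar attention map

    def forward(self, skip, gate):
        # Upsample the coarser gating signal to the skip resolution.
        g = F.interpolate(self.phi(gate), size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + g)))
        return skip * attn  # attend to expression-relevant regions, keep identity features


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention block, shown here as a stand-in for a
    direct skip-connection to model long-range dependencies."""

    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.key = nn.Conv2d(ch, ch // 8, kernel_size=1)
        self.value = nn.Conv2d(ch, ch, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw) pairwise weights
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


# Quick shape check with hypothetical channel counts.
if __name__ == "__main__":
    skip, gate = torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32)
    gated = AttentionGate(64, 128, 32)(skip, gate)
    refined = SelfAttention2d(64)(gated)
    print(gated.shape, refined.shape)  # both torch.Size([1, 64, 64, 64])
```

In such a design, the attention gate suppresses skip features outside the regions that need editing, while the self-attention block lets distant facial regions influence each other before the decoder reconstructs the image.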

Publication
2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
付宇卓
Professor, Doctoral Supervisor