Continuously Controllable Facial Expression Editing in Talking Face Videos

TAFFC 2023
Zhiyao Sun1, Yu-Hui Wen2, Tian Lv1, Yanan Sun1, Ziyang Zhang3, Yaoyuan Wang3, Yong-Jin Liu1
1Tsinghua University 2Beijing Jiaotong University 3Huawei
[Teaser figure]

Our method transforms a neutral video into an emotional video with continuously controllable expressions.

Abstract

Recently, audio-driven talking face video generation has attracted considerable attention. However, few studies address the emotional editing of these talking face videos with continuously controllable expressions, which is in strong demand in industry. The challenge is that speech-related expressions and emotion-related expressions are often highly coupled. Moreover, traditional image-to-image translation methods do not work well in our application because expressions are coupled with other attributes such as head pose, i.e., translating the expression of the character in each frame may simultaneously change the head pose due to bias in the training data distribution. In this paper, we propose a high-quality facial expression editing method for talking face videos that allows the user to continuously control the target emotion in the edited video. We present a new perspective on this task as a special case of motion information editing, where we use a 3DMM to capture major facial movements and an associated texture map modeled by a StyleGAN to capture appearance details. Both representations (3DMM and texture map) contain emotional information; they can be continuously modified by neural networks and easily smoothed by averaging in coefficient/latent spaces, making our method simple yet effective. We also introduce a mouth shape preservation loss to control the trade-off between lip synchronization and the degree of exaggeration of the edited expression. Extensive experiments and a user study show that our method achieves state-of-the-art performance across various evaluation criteria.
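
To illustrate the continuous-control idea described above, the sketch below shows how an emotion-strength parameter could linearly blend neutral and emotion-edited 3DMM expression coefficients (or StyleGAN latent codes), how temporal jitter could be reduced by simple averaging in those spaces, and how a mouth-shape preservation term could penalize deviations in the mouth region. This is a minimal sketch under assumed names, tensor shapes, and loss form; it is not the paper's implementation.

# Illustrative sketch only: the functions, shapes, and loss below are
# assumptions made for exposition, not the authors' code.
import numpy as np

def blend(neutral, edited, alpha):
    """Continuously controllable edit: alpha=0 keeps the neutral input,
    alpha=1 applies the full emotion edit, intermediate values interpolate."""
    return (1.0 - alpha) * neutral + alpha * edited

def temporal_smooth(seq, window=5):
    """Smooth per-frame coefficients/latents by averaging over a sliding
    window (odd window size assumed); reduces frame-to-frame jitter."""
    pad = window // 2
    padded = np.concatenate([np.repeat(seq[:1], pad, axis=0),
                             seq,
                             np.repeat(seq[-1:], pad, axis=0)], axis=0)
    kernel = np.ones(window) / window
    return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                     for d in range(seq.shape[1])], axis=1)

def mouth_preservation_loss(edited_lm, original_lm, mouth_idx):
    """Hypothetical mouth-shape preservation term: an L2 penalty on mouth
    landmarks/vertices, trading lip sync against expression exaggeration."""
    return np.mean((edited_lm[:, mouth_idx] - original_lm[:, mouth_idx]) ** 2)

# Example: a 100-frame clip with 64-dim 3DMM expression coefficients.
neutral_exp = np.random.randn(100, 64)   # per-frame neutral coefficients
edited_exp = np.random.randn(100, 64)    # per-frame emotion-edited coefficients
controlled = blend(neutral_exp, edited_exp, alpha=0.6)
controlled = temporal_smooth(controlled, window=5)

Because both the 3DMM coefficients and the StyleGAN latents live in linear spaces, the same blending and averaging can be applied to either representation; the weight on the mouth-preservation term would then set how strongly lip synchronization is preserved as the emotion edit becomes more exaggerated.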

Method

[Pipeline overview figure]

Videos

Demo


Additional Results

BibTeX

@article{sun2023fee4tv,
  author={Zhiyao Sun and Yu-Hui Wen and Tian Lv and Yanan Sun and Ziyang Zhang and Yaoyuan Wang and Yong-Jin Liu},
  journal={IEEE Transactions on Affective Computing}, 
  title={Continuously Controllable Facial Expression Editing in Talking Face Videos}, 
  year={2023},
  doi={10.1109/TAFFC.2023.3334511}
}