DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

SIGGRAPH 2024 (Journal)
Zhiyao Sun¹, Tian Lv¹, Sheng Ye¹, Matthieu Lin¹, Jenny Sheng¹, Yu-Hui Wen², Minjing Yu³, Yong-Jin Liu¹

¹Tsinghua University  ²Beijing Jiaotong University  ³Tianjin University
[Teaser figure]

DiffPoseTalk generates diverse and stylistic 3D facial motions with head poses from speech.

Abstract

The generation of stylistic 3D facial animations driven by speech is a significant challenge because it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion. Existing methods, however, either employ a deterministic model for the speech-to-motion mapping or encode style with a one-hot scheme, which fails to capture the complexity of speaking styles and thus limits generalization. In this paper, we propose DiffPoseTalk, a generative framework based on the diffusion model combined with a style encoder that extracts style embeddings from short reference videos. During inference, we employ classifier-free guidance to guide the generation process based on the speech and style. We further extend the model to generate head poses, improving the perceived naturalness of the animation. Additionally, we address the shortage of scanned 3D talking face data by training our model on 3DMM parameters reconstructed from a high-quality, in-the-wild audio-visual dataset. Our extensive experiments and user study demonstrate that our approach outperforms state-of-the-art methods. The code and dataset are available at https://diffposetalk.github.io.
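For readers unfamiliar with classifier-free guidance over two conditions, the sketch below shows one common way to combine an unconditional pass, a speech-only pass, and a speech-plus-style pass at each denoising step. It is a minimal illustration, not the released code: the denoiser signature, the nesting order of the two conditions, and the guidance weights `w_audio` and `w_style` are all assumptions.

```python
import torch

def cfg_predict(model, x_t, t, audio, style, w_audio=2.0, w_style=2.0):
    """Two-condition classifier-free guidance (illustrative sketch only).

    `model(x_t, t, audio, style)` is a hypothetical denoiser whose
    conditions may be None, standing in for the "null" condition that
    is randomly dropped during training. The guided prediction nests
    the two conditions: speech first, then style on top of speech.
    """
    uncond     = model(x_t, t, None,  None)   # no conditioning
    audio_only = model(x_t, t, audio, None)   # speech conditioning only
    full       = model(x_t, t, audio, style)  # speech + style conditioning
    return (uncond
            + w_audio * (audio_only - uncond)    # push toward the speech
            + w_style * (full - audio_only))     # push toward the style
```

Larger weights trade diversity for stronger adherence to the speech (lip sync) or to the reference style; the specific values above are placeholders.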

Method

[Pipeline figure]

Left: Transformer-based denoising network. Right: Speaking style encoder.
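As a rough illustration of the left half of the pipeline, the sketch below mirrors a Transformer denoiser that fuses frame-aligned audio features with noisy motion tokens, prepends a token carrying the diffusion timestep and style embedding, and predicts the clean motion. All dimensions (`motion_dim`, `audio_dim`, `style_dim`, etc.), the fusion by addition, and the timestep embedding are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DenoisingTransformer(nn.Module):
    """Minimal sketch of a Transformer-based denoising network.

    Assumed inputs:
      x_t:   (B, T, motion_dim) noisy 3DMM motion parameters at step t
      t:     (B,) integer diffusion timesteps
      audio: (B, T, audio_dim) frame-aligned speech features from a
             pretrained speech encoder (the exact extractor is assumed)
      style: (B, style_dim) style embedding from the style encoder
    """

    def __init__(self, motion_dim=64, audio_dim=768, style_dim=128,
                 d_model=512, n_heads=8, n_layers=8):
        super().__init__()
        self.motion_in = nn.Linear(motion_dim, d_model)
        self.audio_in = nn.Linear(audio_dim, d_model)
        self.style_in = nn.Linear(style_dim, d_model)
        self.time_in = nn.Sequential(  # crude timestep embedding (an assumption)
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, motion_dim)

    def forward(self, x_t, t, audio, style):
        # One token carries the timestep and style; the rest are
        # per-frame fusions of noisy motion and aligned audio features.
        cond = (self.time_in(t.float().view(-1, 1, 1) / 1000.0)
                + self.style_in(style).unsqueeze(1))           # (B, 1, d_model)
        tokens = self.motion_in(x_t) + self.audio_in(audio)    # (B, T, d_model)
        h = self.backbone(torch.cat([cond, tokens], dim=1))
        return self.out(h[:, 1:])  # predict the clean motion per frame
```

Under the same assumptions, the speaking style encoder on the right would be a small Transformer that pools a short reference motion clip into the `style` vector consumed above.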

Video

BibTeX

@article{sun2024diffposetalk,
  title={DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models},
  author={Sun, Zhiyao and Lv, Tian and Ye, Sheng and Lin, Matthieu and Sheng, Jenny and Wen, Yu-Hui and Yu, Minjing and Liu, Yong-Jin},
  journal={ACM Transactions on Graphics (TOG)},
  doi={10.1145/3658221},
  volume={43},
  number={4},
  articleno={46},
  numpages={9},
  year={2024},
  publisher={ACM},
  address={New York, NY, USA}
}