DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation

Haoyu Zhao, Zhongang Qi, Cong Wang, Qingping Zheng, Guansong Lu, Fei Chen, Hang Xu, Zuxuan Wu
Fudan University; Huawei Noah's Ark Lab

(Sound on!) DynamiCtrl is a next-generation video synthesis model for human image animation, built on the MM-DiT architecture. With DynamiCtrl, you can animate whole-body, half-body, or head movements, recreate classic movie scenes with your own photos, customize the background through textual prompts, and build digital-human applications.

Abstract

Human image animation has recently gained significant attention due to advancements in generative models. However, existing methods still face two major challenges: (1) architectural limitations—most models rely on U-Net, which underperforms MM-DiT; and (2) the neglect of textual information, which can enhance controllability. In this work, we introduce DynamiCtrl, a novel framework that not only explores different pose-guided control structures in MM-DiT, but also reemphasizes the crucial role of text in this task. Specifically, we employ a Shared VAE encoder for both reference images and driving pose videos, eliminating the need for an additional pose encoder and simplifying the overall framework. To incorporate pose features into the full attention blocks, we propose Pose-adaptive Layer Norm (PadaLN), which utilizes adaptive layer normalization to encode sparse pose features. The encoded features are directly added to the visual input, preserving the spatiotemporal consistency of the backbone while effectively introducing pose control into MM-DiT. Furthermore, within the full attention mechanism, we align textual and visual features to enhance controllability. By leveraging text, we not only enable fine-grained control over the generated content, but also, for the first time, achieve simultaneous control over both background and motion. Experimental results verify the superiority of DynamiCtrl on benchmark datasets, demonstrating its strong identity preservation, heterogeneous character driving, background controllability, and high-quality synthesis.
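
To make the PadaLN idea concrete, below is a minimal PyTorch sketch of one plausible reading of it: pose tokens are layer-normalized with scale and shift regressed from a conditioning embedding (e.g., the diffusion timestep embedding), then projected and added to the visual tokens, so the backbone's token layout stays untouched. The class name, the zero-initialized output projection, and the (batch, sequence, dim) layout are our assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class PadaLN(nn.Module):
    # Minimal sketch of Pose-adaptive Layer Norm (hypothetical layout).
    # Pose tokens are normalized with scale/shift regressed from a
    # conditioning embedding, then added to the visual tokens, leaving
    # the MM-DiT backbone's spatiotemporal token grid unchanged.

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale_shift = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        # Zero-init the output projection so pose injection starts as a
        # no-op and is learned gradually (a common choice; our assumption).
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, visual_tokens, pose_tokens, cond_emb):
        # visual_tokens, pose_tokens: (batch, seq, dim); cond_emb: (batch, dim)
        scale, shift = self.to_scale_shift(cond_emb).unsqueeze(1).chunk(2, dim=-1)
        encoded = self.norm(pose_tokens) * (1 + scale) + shift
        # Additive injection: the encoded pose features are added directly
        # to the visual input, as described in the abstract.
        return visual_tokens + self.proj(encoded)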

Human Animation

(Video gallery: whole-body, half-body, and head animation demos.)
Method Overview

Mixed Video-Image Finetuning

DynamiCtrl is a novel framework that enhances pose-guided video synthesis in MM-DiT by emphasizing the role of text. It uses a Shared VAE encoder for both reference images and driving pose videos, simplifying the framework by removing the need for a separate pose encoder. DynamiCtrl introduces Pose-adaptive Layer Norm (PadaLN) to inject sparse pose features into the model while maintaining spatiotemporal consistency. It also aligns textual and visual features within the full attention mechanism, enabling fine-grained control over the generated content and, for the first time, simultaneous control over both background and motion.
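
The sketch below ties these pieces together at the level of a single backbone block: pose latents (produced by the shared VAE and patchified to the visual grid) enter through the PadaLN sketch above, then text and visual tokens attend jointly in full attention, MM-DiT style. All names, the layer layout, and the example shapes are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class JointBlock(nn.Module):
    # Hedged sketch of one transformer block: pose injection via PadaLN
    # (defined in the sketch above), followed by full attention over the
    # concatenated text + visual sequence, which is where textual and
    # visual features are aligned for controllability.

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.pada_ln = PadaLN(dim)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_tokens, visual_tokens, pose_tokens, cond_emb):
        # 1) Inject pose control into the visual stream.
        visual_tokens = self.pada_ln(visual_tokens, pose_tokens, cond_emb)
        # 2) Full attention over [text; visual] tokens.
        joint = torch.cat([text_tokens, visual_tokens], dim=1)
        h = self.norm(joint)
        joint = joint + self.attn(h, h, h, need_weights=False)[0]
        # Split back into the two streams.
        n = text_tokens.shape[1]
        return joint[:, :n], joint[:, n:]

# Usage with illustrative shapes: 77 text tokens, 1024 visual tokens.
block = JointBlock(dim=512, num_heads=8)
t = torch.randn(2, 77, 512)
v = torch.randn(2, 1024, 512)
p = torch.randn(2, 1024, 512)   # pose latents on the same grid as v
c = torch.randn(2, 512)         # e.g. a timestep embedding
text_out, visual_out = block(t, v, p, c)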

BibTeX

@article{zhao2025dynamictrl,
  title={DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation},
  author={Zhao, Haoyu and Qi, Zhongang and Wang, Cong and Zheng, Qingping and Lu, Guansong and Chen, Fei and Xu, Hang and Wu, Zuxuan},
  journal={arXiv preprint},
  year={2025}
}