VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model

Anonymous Authors
Anonymous Institute

Abstract

Identity-consistent video generation seeks to synthesize videos guided by both textual prompts and reference images of entities. Current approaches typically rely on cross-attention layers to integrate the entity's appearance; these layers predominantly capture semantic attributes, which compromises entity fidelity. Moreover, these methods require iterative fine-tuning for each new entity, limiting their applicability. To address these challenges, we introduce VideoAssembler, a novel end-to-end framework for identity-consistent video generation that performs inference directly on new entities without per-entity fine-tuning. VideoAssembler produces videos that are both flexible with respect to the input reference entities and responsive to textual conditions. Additionally, by varying the number of input entity images, VideoAssembler supports tasks ranging from image-to-video generation to video editing. VideoAssembler comprises two principal components: the Reference Entity Pyramid (REP) encoder and the Entity-Prompt Attention Fusion (EPAF) module. The REP encoder infuses comprehensive appearance details into the denoising stages of the Stable Diffusion model, while the EPAF module effectively integrates text-aligned features. Furthermore, to mitigate data scarcity, we present a methodology for preprocessing the training data. Evaluations on the UCF-101, MSR-VTT, and DAVIS datasets show that VideoAssembler achieves strong performance in both quantitative and qualitative analyses (346.84 FVD and 48.01 IS on UCF-101).
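
The EPAF module described above fuses text-aligned prompt features with reference-entity features inside the denoising U-Net. The following is a minimal PyTorch sketch of that fusion idea only; the class name, the zero-initialized gate, and the tensor shapes are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class EntityPromptAttentionFusion(nn.Module):
    """Sketch: cross-attend to text features and to entity features, then fuse."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.entity_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Learnable gate (assumed); zero init makes entity injection start as a no-op.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x, text_feats, entity_feats):
        # x:            (B, N, D) latent tokens from a U-Net block
        # text_feats:   (B, T, D) text embeddings of the prompt
        # entity_feats: (B, E, D) features of the reference entity images
        text_out, _ = self.text_attn(x, text_feats, text_feats)
        entity_out, _ = self.entity_attn(x, entity_feats, entity_feats)
        # Fuse text-aligned semantics with gated entity appearance details.
        return self.norm(x + text_out + torch.tanh(self.gate) * entity_out)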

Generation with Input Entities

Generation with Input Entity

Editing with Input Video

Method Overview

Mixed Video-Image Finetuning

The training pipeline of our VideoAssembler method. The model generates high-fidelity videos from the given entities and text prompts. We train all attention layers within the U-Net while keeping the VAE and CLIP models frozen.
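
As noted in the caption, only the U-Net attention layers are trained while the VAE and CLIP models stay frozen. Below is a minimal sketch of such a parameter-freezing setup, assuming a diffusers-style Stable Diffusion stack; the names vae, text_encoder, unet and the "attn" substring check are illustrative assumptions, not the authors' code.

def configure_trainable_params(unet, vae, text_encoder):
    # Keep the VAE and the CLIP text encoder frozen.
    vae.requires_grad_(False)
    text_encoder.requires_grad_(False)

    # Freeze the U-Net, then re-enable gradients only for attention-layer parameters.
    unet.requires_grad_(False)
    trainable = []
    for name, param in unet.named_parameters():
        if "attn" in name.lower():
            param.requires_grad_(True)
            trainable.append(param)
    return trainable

# Usage sketch: pass only the returned parameters to the optimizer, e.g.
# optimizer = torch.optim.AdamW(configure_trainable_params(unet, vae, text_encoder), lr=1e-5)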

BibTeX

@article{anonymous2023videoassembler,
  title={VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model},
  author={Anonymous Authors},
  journal={arXiv preprint arXiv:2311.17338},
  year={2023}
}