arXiv preprint – DreaMoving: A Human Video Generation Framework based on Diffusion Models


In this episode we discuss DreaMoving: A Human Video Generation Framework based on Diffusion Models by Mengyang Feng, Jinlin Liu, Kai Yu, Yuan Yao, Zheng Hui, Xiefan Guo, Xianhui Lin, Haolan Xue, Chen Shi, Xiaowen Li, Aojie Li, Xiaoyang Kang, Biwen Lei, Miaomiao Cui, Peiran Ren, Xuansong Xie. DreaMoving is a diffusion-based framework for generating customized human dance videos in which a target person performs specified dance moves. It consists of two main components: the Video ControlNet, which controls motion, and the Content Guider, which preserves the target person's identity throughout the video. The framework is designed to be user-friendly and flexible, supports a wide range of video styles, and is described in more detail on its project page.
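The summary stays at this high level, so the following PyTorch sketch is only a hedged illustration of how a pose-conditioned, identity-conditioned video denoiser could be wired together. Every name here (VideoControlNet, ContentGuider, DenoisingBackbone), every tensor shape, and the additive fusion scheme are assumptions made for illustration; they are not the authors' implementation, which is built on a far larger video diffusion U-Net.

```python
import torch
import torch.nn as nn

# Hypothetical sketch -- class names, shapes, and fusion details are
# illustrative assumptions, not the DreaMoving authors' code.

class VideoControlNet(nn.Module):
    """Encodes per-frame control signals (e.g. pose skeletons) into motion features."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Conv3d(3, channels, kernel_size=3, padding=1)

    def forward(self, control_frames):        # (B, 3, T, H, W) pose frames
        return self.encoder(control_frames)   # (B, C, T, H, W) motion residuals


class ContentGuider(nn.Module):
    """Projects a reference-image embedding into an identity vector for conditioning."""
    def __init__(self, embed_dim=512, channels=64):
        super().__init__()
        self.proj = nn.Linear(embed_dim, channels)

    def forward(self, identity_embedding):     # (B, embed_dim), e.g. from an image encoder
        return self.proj(identity_embedding)   # (B, C) identity features


class DenoisingBackbone(nn.Module):
    """Stand-in for the video diffusion U-Net; fuses noise, motion, and identity."""
    def __init__(self, channels=64):
        super().__init__()
        self.in_proj = nn.Conv3d(3, channels, kernel_size=3, padding=1)
        self.head = nn.Conv3d(channels, 3, kernel_size=3, padding=1)

    def forward(self, noisy_video, motion, identity):
        h = self.in_proj(noisy_video) + motion            # additive, ControlNet-style injection
        h = h + identity[:, :, None, None, None]          # broadcast identity over T, H, W
        return self.head(h)                               # predicted noise


# One denoising step under these assumptions:
B, T, H, W = 1, 8, 32, 32
noisy = torch.randn(B, 3, T, H, W)
poses = torch.randn(B, 3, T, H, W)     # e.g. rendered pose-skeleton frames
identity = torch.randn(B, 512)         # e.g. a CLIP-style reference-image embedding

motion = VideoControlNet()(poses)
id_feats = ContentGuider()(identity)
noise_pred = DenoisingBackbone()(noisy, motion, id_feats)
print(noise_pred.shape)                # torch.Size([1, 3, 8, 32, 32])
```

The point of the sketch is the division of labor the paper describes: motion control and identity preservation are handled by separate conditioning modules, each feeding its signal into a shared denoising backbone rather than being entangled in one network.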

