Mode Seeking meets Mean Seeking for Fast Long Video Generation


In this episode, we discuss Mode Seeking meets Mean Seeking for Fast Long Video Generation by Shengqu Cai, Weili Nie, Chao Liu, Julius Berner, Lvmin Zhang, Nanye Ma, Hansheng Chen, Maneesh Agrawala, Leonidas Guibas, Gordon Wetzstein, and Arash Vahdat. The paper presents a training paradigm that combines mode seeking and mean seeking to decouple local video fidelity from long-term coherence, built on a Decoupled Diffusion Transformer. A global Flow Matching head, trained on a limited set of long videos, captures narrative structure, while a local Distribution Matching head, aligned with a frozen short-video teacher, ensures local realism. This approach enables fast synthesis of minute-scale videos that maintain both high-quality local detail and coherent long-range motion, significantly improving the fidelity–horizon trade-off.
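To make the two-headed objective concrete, here is a minimal toy sketch of how the two losses described above might be combined. This is an illustrative assumption, not the paper's implementation: `flow_matching_loss`, `distribution_matching_loss`, the latent shapes, and the weighting `lam` are all hypothetical stand-ins, with the flow-matching target simplified to the straight-line velocity and the teacher alignment simplified to a squared difference.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(pred_velocity, x0, x1):
    # Flow Matching regresses the predicted velocity onto the straight-line
    # target velocity (x1 - x0) between noise x0 and data x1 (simplified).
    target = x1 - x0
    return float(np.mean((pred_velocity - target) ** 2))

def distribution_matching_loss(student_out, teacher_out):
    # Crude proxy for distribution matching: align the local head's outputs
    # with those of a frozen short-video teacher on short clips.
    return float(np.mean((student_out - teacher_out) ** 2))

# Toy arrays standing in for video latents: (batch, frames, dim).
x0 = rng.standard_normal((2, 16, 8))   # noise
x1 = rng.standard_normal((2, 16, 8))   # long-video latents
pred_v = x1 - x0 + 0.1 * rng.standard_normal(x0.shape)  # global head's velocity guess

student = rng.standard_normal((2, 4, 8))                       # local head on short clips
teacher = student + 0.05 * rng.standard_normal(student.shape)  # frozen teacher outputs

lam = 1.0  # hypothetical weight balancing the two objectives
total = (flow_matching_loss(pred_v, x0, x1)
         + lam * distribution_matching_loss(student, teacher))
print(f"total objective: {total:.4f}")
```

The point of the sketch is the decoupling: the first term sees only long videos (global structure), the second only short clips scored against a frozen teacher (local realism), and training balances the two.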

