Massachusetts Institute of Technology
  • Published: Feb. 10, 2025
  • Venue: arXiv

History-Guided Video Diffusion

  • Kiwhan Song *
  • Boyuan Chen *
  • Max Simchowitz
  • Yilun Du
  • Russ Tedrake
  • Vincent Sitzmann
* shared first author
@inproceedings{song2025historyguidedvideodiffusion,
    title     = {History-Guided Video Diffusion},
    author    = {Song, Kiwhan and Chen, Boyuan and Simchowitz, Max and
                 Du, Yilun and Tedrake, Russ and Sitzmann, Vincent},
    year      = {2025},
    booktitle = {arXiv},
}

TL;DR: Diffuse long videos by performing guidance over different histories, enabled by the Diffusion Forcing Transformer, a simple fine-tunable add-on to any existing sequence diffusion model.

Abstract

Classifier-free guidance (CFG) is a key technique for improving conditional generation in diffusion models, enabling more accurate control while enhancing sample quality. It is natural to extend this technique to video diffusion, which generates video conditioned on a variable number of context frames, collectively referred to as history. However, we find two key challenges in guiding with variable-length history: architectures that only support fixed-size conditioning, and the empirical observation that CFG-style history dropout performs poorly. To address these challenges, we propose the Diffusion Forcing Transformer (DFoT), a video diffusion architecture and theoretically grounded training objective that jointly enable conditioning on a flexible number of history frames. We then introduce History Guidance, a family of guidance methods uniquely enabled by DFoT. We show that its simplest form, vanilla history guidance, already significantly improves video generation quality and temporal consistency. A more advanced method, history guidance across time and frequency, further enhances motion dynamics, enables compositional generalization to out-of-distribution history, and stably rolls out extremely long videos.
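
To make the guidance mechanism concrete, below is a minimal PyTorch sketch of CFG-style guidance where the conditioning signal is the history. Every name here (denoise, history_guided_eps, the tensor layout) is a hypothetical stand-in, not the released DFoT API; the compositional variant simply applies the standard multi-condition CFG composition to several partial histories, in the spirit of history guidance across time and frequency.

import torch

def history_guided_eps(denoise, x_t, t, history, w=1.5):
    # Vanilla history guidance: classifier-free guidance where the
    # condition is the history (the context frames).
    #   eps = eps_uncond + w * (eps_cond - eps_uncond)
    eps_uncond = denoise(x_t, t, history=None)     # history fully dropped
    eps_cond = denoise(x_t, t, history=history)    # condition on all frames
    return eps_uncond + w * (eps_cond - eps_uncond)

def composed_history_guided_eps(denoise, x_t, t, histories, weights):
    # Compositional variant (an assumption about the exact formulation):
    # accumulate guidance directions from several partial histories, e.g.
    # different temporal subsets or frequency-filtered copies, each with
    # its own weight.
    eps_uncond = denoise(x_t, t, history=None)
    eps = eps_uncond.clone()
    for h, w in zip(histories, weights):
        eps = eps + w * (denoise(x_t, t, history=h) - eps_uncond)
    return eps

# Toy usage with a random stand-in denoiser (illustration only).
if __name__ == "__main__":
    def denoise(x_t, t, history=None):
        bias = 0.0 if history is None else 0.01 * len(history)
        return 0.1 * x_t + bias

    x_t = torch.randn(1, 16, 3, 64, 64)  # (batch, frames, channels, H, W)
    hist = [torch.randn(3, 64, 64) for _ in range(4)]
    eps = history_guided_eps(denoise, x_t, t=500, history=hist)
    print(eps.shape)  # torch.Size([1, 16, 3, 64, 64])

In an actual sampler, these combined epsilon estimates would replace the single model call at each denoising step; the sketch only shows how the conditional and unconditional predictions are mixed.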