DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders
Abstract
DiffusionBrowser enables interactive preview generation and control during the video denoising process, improving the user experience and revealing how the model composes scenes.
Video diffusion models have revolutionized generative video synthesis, but they are imprecise, slow, and can be opaque during generation -- keeping users in the dark for a prolonged period. In this work, we propose DiffusionBrowser, a model-agnostic, lightweight decoder framework that allows users to interactively generate previews at any point (timestep or transformer block) during the denoising process. Our model can generate multi-modal preview representations that include RGB and scene intrinsics at more than 4× real-time speed (less than 1 second for a 4-second video) and that convey appearance and motion consistent with the final video. With the trained decoder, we show that it is possible to interactively guide the generation at intermediate noise steps via stochasticity reinjection and modal steering, unlocking a new control capability. Moreover, we systematically probe the model using the learned decoders, revealing how scene, object, and other details are composed and assembled during the otherwise black-box denoising process.
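To make the idea concrete, here is a minimal sketch of a lightweight preview decoder with stochasticity reinjection, in the spirit of the abstract. All class and function names, tensor shapes, and the noise scale `sigma` are illustrative assumptions, not the paper's released implementation.

```python
import torch
from torch import nn

class PreviewDecoder(nn.Module):
    """Hypothetical lightweight head that maps an intermediate video latent
    to a coarse multi-modal preview (e.g., 3 RGB channels + 1 intrinsic
    channel such as depth)."""
    def __init__(self, latent_channels: int = 16, out_channels: int = 4):
        super().__init__()
        # A 1x1x1 convolution keeps decoding far cheaper than finishing the
        # denoising trajectory and running the full VAE decoder.
        self.head = nn.Conv3d(latent_channels, out_channels, kernel_size=1)

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        # latent: (batch, channels, frames, height, width)
        return self.head(latent)

@torch.no_grad()
def preview_and_reinject(decoder: PreviewDecoder,
                         latent: torch.Tensor,
                         sigma: float = 0.1):
    """Decode a preview at the current denoising step; if the user rejects
    it, reinject noise so the remaining trajectory can be resampled from
    this point (the 'stochasticity reinjection' control described above)."""
    preview = decoder(latent)                          # fast multi-modal preview
    resampled = latent + sigma * torch.randn_like(latent)
    return preview, resampled
```

In a full sampler, `resampled` would replace the current latent before the remaining denoising steps, letting the user branch the trajectory after inspecting the preview rather than waiting for generation to finish.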
Community
Project page: https://susunghong.github.io/DiffusionBrowser
The following similar papers were recommended by the Semantic Scholar API:
- CtrlVDiff: Controllable Video Generation via Unified Multimodal Video Diffusion (2025)
- GeoVideo: Introducing Geometric Regularization into Video Generation Model (2025)
- DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation (2025)
- UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback (2025)
- InstantViR: Real-Time Video Inverse Problem Solver with Distilled Diffusion Prior (2025)
- V-RGBX: Video Editing with Accurate Controls over Intrinsic Properties (2025)
- V-Warper: Appearance-Consistent Video Diffusion Personalization via Value Warping (2025)