TurboDiffusion
Hello, Phr00t! Your work is magnificent, and thank you so much for all your hard work. An interesting variation of Wan was recently released. Are there any plans to create an AIO with it?
https://huggingface.co/TurboDiffusion/TurboWan2.2-I2V-A14B-720P
I don't get the big deal with that "model".
It is just using Sage Attention, quantization, and step/CFG distillation LoRAs (rCM specifically) to claim its "speedups", things that are already widely used and/or already part of this "all in one". It gets huge speedup numbers because it compares itself to the full, unquantized model running at full steps with CFG > 1. It also mentions "sparse linear attention"; I'm unsure whether that adds anything substantial not already covered. Unless I'm missing something, there is nothing in TurboWan actually worth investigating as far as this "all in one" is concerned.
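To show why that kind of baseline comparison produces big headline numbers, here's a rough sketch with made-up factors (all of these step counts and multipliers are illustrative assumptions, not measurements from TurboWan): step distillation cuts the step count, dropping CFG removes the second forward pass per step, and quantization plus Sage Attention speeds up each step. Multiply them together and you get a large "speedup" over the unoptimized baseline, even though each piece is an existing, widely used technique.

```python
# Illustrative, hypothetical factors -- not measured TurboWan numbers.
baseline_steps = 50       # assumed full-step count for the unoptimized baseline
distilled_steps = 4       # assumed step count after step distillation (e.g. rCM)

step_speedup = baseline_steps / distilled_steps  # fewer denoising steps
cfg_speedup = 2.0         # CFG > 1 runs two forward passes per step; CFG distillation removes one
quant_attn_speedup = 1.5  # assumed combined gain from quantization + Sage Attention

total = step_speedup * cfg_speedup * quant_attn_speedup
print(f"{total:.1f}x faster than the unoptimized baseline")  # 37.5x
```

With numbers in this ballpark, a 30-40x headline figure falls out naturally, which is why comparing against a setup that already uses these optimizations would show a much smaller gain.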
What type of LOADER do you use?