WanGP by DeepBeepMeep: The best Open Source Video Generative Models Accessible to the GPU Poor
WanGP supports the Wan (and derived) models, Hunyuan Video and LTX Video models.
Discord Server to get Help from Other Users and show your Best Videos: https://discord.gg/g7efUW9jGV
Follow DeepBeepMeep on Twitter/X to get the Latest News: https://x.com/deepbeepmeep

This WanGP version has the following perks: 3D pose preprocessing entirely rewritten to be fast and compatible with any PyTorch version, very low VRAM requirements for multi-character generation, and an experimental long gen mode / sliding windows (the SCAIL Preview doesn't officially support long gens yet).
pi-Flux 2: you don't use Flux 2 because you find it too slow? You won't be able to use that excuse anymore: pi-Flux 2 is a 4-step distillation of the best image generator. It supports both image editing and text to image generation.
Kandinsky 5: for the video model collectors among you, you can try the Kandinsky model family; the 2B model's quality is especially impressive given its small size.
Qwen Image Layered: a new Qwen Image variant that lets you extract RGBA layers from your images so that each layer can be edited separately.
Qwen Image Edit Plus 2511: improves identity preservation (especially at 1080p) and integrates popular effects such as relighting and camera changes out of the box.
Lora Accelerators: Lora Accelerators for Wan 2.2 t2v and Wan 2.1 i2v have been added (they can be activated using the Profile settings as usual).
update 9.91: added Kandinsky 5 & Qwen Image Layered
update 9.92: added Qwen Image Edit Plus 2511
These two features are going to change the life of many people:
Pause Button: ever had an urge to use your GPU for a very important task that can't wait (a game, for instance)? Here comes your new friend, the Pause button. Not only will it suspend the current gen in progress, it will also free most of the VRAM used by WanGP (please note that the RAM used by WanGP won't be released). When you are done, just click the Resume button to restart exactly from where you stopped.
WanGP Headless: trouble running WanGP remotely, or stability issues with Gradio or your web browser? That's all in the past thanks to WanGP Headless mode. Here is how it works: first, make your shopping list of video gens using the classic WanGP Gradio interface. When you are done, click the Save Queue button and quit WanGP.
Then in your terminal window just write this:
python wgp.py --process my_queue.zip
With WanGP 9.82, you can also process a settings file (a .json file exported using the Export Settings button):
python wgp.py --process my_settings.json
Processing settings files can be useful for quick gens / testing when you don't need to provide source image files (otherwise you will need to fill in the paths to Start Images, Ref Images, ...).
Output file names can also be customized with a template, for instance:
{date(YYYY-MM-DD_HH-mm-ss)}_{seed}_{prompt(50)}, {num_inference_steps}
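As an illustration (the values below are made up), a gen with seed 42, 30 inference steps and the prompt "a red fox running through snow" should give a file name along the lines of:
2025-01-15_10-30-00_42_a red fox running through snow, 30.mp4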
Hunyuan Video 1.5 i2v distilled: for those in need of their daily dose of new models, added Hunyuan Video 1.5 i2v Distilled (official release) + a Lora Accelerator extracted from it (to be used in future finetunes). Also added MagCache support (optimized for 20 steps) for Hunyuan Video 1.5.
Wan-Move: another model specialized in controlling motion using a Start Image and Trajectories. According to the authors' paper it is the best one. The Motion Designer has been upgraded to also generate trajectories for Wan-Move.
Z-Image Control Net v2: an upgrade of Z-Image Control Net. It offers much better results but requires much more processing and VRAM. But don't panic yet: it has been VRAM optimized, which was not an easy trick as this model is complex. It also has Inpainting support, but I need more info before releasing this feature.
update 9.81: added Hunyuan Video 1.5 i2v distilled + magcache
update 9.82: added Settings headless processing, output file customization, refactored Task edition and queue processing
update 9.83: Qwen Edit+ upgraded: no more any zoom out at 1080p, enabled mask, enabled image refs with inpainting
update 9.84: added Wan-Move support
update 9.85: added Z-Image Control net v2
update 9.86: added NAG support for Z-Image
The only snag is that it has 32B parameters for the Transformer part and 24B parameters for the Text Encoder part.
Behold the WanGP Miracle! Flux 2 will work with only 8 GB of VRAM if you are happy with 8-bit quantization (no need for lower quality 4-bit). With 9 GB of VRAM you can run the model at full power. You will need at least 64 GB of RAM; if not, maybe Memory Profile 5 will be your friend.
With WanGP v9.74, the hidden Control Net power of Flux 2 has also been unleashed from the vanilla model. You can now enjoy Flux 2 Inpainting and Pose transfer. This can be combined with Image Refs to get the best Identity Preservation / Face Swapping an Image Model can offer: just target the effect at a specific area using a Mask, then set Denoising Strength to 0.9-1.0 and Masking Strength to 0.3-0.4 for perfect blending.
While waiting for Z-Image Edit, WanGP 9.74 now offers support for the Z-Image Fun Control Net. You can use it for Pose transfer or Canny Edge transfer. Don't be surprised if it is a bit slower. Please note it works best at 1080p and requires a minimum of 9 steps.
I have added a new Memory Profile, Profile 4+, that is slightly slower than Profile 4 but can save you up to 1 GB of VRAM with Flux 2.
Also, as we now have quite a few models and Lora folders, I have moved all the Lora folders into the 'loras' folder. There are also now dedicated subfolders for the Wan 5B and Wan 1.3B models. A conversion script should have moved the Loras to the right locations, but I advise you to check just in case (see below).
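If you want to double check where your Loras ended up after the conversion, you can simply list the new folder tree from the WanGP directory (a plain shell command, nothing WanGP specific):
cd Wan2GP
ls -R loras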
update 9.71 : added missing source file, have fun !
update 9.72 : added Z-Image & Loras reorg
update 9.73 : added Steady Dancer
update 9.74 : added Z-Image Fun Control Net & Flux 2 Control Net + Masking
So here is Tencent, back in the race: let's welcome Hunyuan Video 1.5.
Despite having only 8B parameters, it offers quite a high level of quality. It is not just one model but a family of models:
Each model comes on day one with several finetunes specialized for a specific resolution. The downside right now is that to get the best quality you need to use guidance > 1 and a high number of Steps (20+).
But don't go away yet! LightX2V (https://huggingface.co/lightx2v/Hy1.5-Distill-Models/) is on deck and has already delivered an accelerated 4-step Finetune for the t2v 480p model. It is part of today's delivery.
I have extracted the LightX2V magic into an 8-step Accelerator Lora that seems to work for i2v and the other resolutions. This should be good enough while waiting for the other official LightX2V releases (just select this Lora in the Settings dropdown box).
WanGP's implementation of Hunyuan 1.5 is quite complete, as you get Video Gen Preview (a WanGP exclusive!) and Sliding Window support straight away. It is also ready for TeaCache and MagCache (just waiting for the official parameters).
WanGP's Hunyuan 1.5 is super VRAM optimized: you will need less than 20 GB of VRAM to generate 12s (289 frames) at 720p.
Please note that Hunyuan v1 Loras are not compatible since the latent space is different. You can add Loras for Hunyuan Video 1.5 in the loras_hunyuan/1.5 folder, as shown below.
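For instance, installing a freshly downloaded Hunyuan 1.5 Lora just means copying it there (the file name below is a placeholder):
cd Wan2GP
cp ~/Downloads/my_hunyuan15_lora.safetensors loras_hunyuan/1.5/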
Update 9.62 : Added Lora Accelerator
Update 9.61 : Added VAE Temporal Tiling
In this release WanGP turns you into a Motion Master:
Motion Designer: this new preinstalled, home-made graphical plugin will let you design trajectories for Vace and for Wan 2.2 i2v Time to Move.
Vace Motion: this is a lesser-known feature of the almighty Vace (it was the last Vace feature not yet implemented in WanGP). Just put some moving rectangles in your Control Video (in Vace raw format) and you will be able to move people / objects around, or even the camera. The Motion Designer will let you create these trajectories in only a few clicks.
Wan 2.2 i2v Time to Move: a few brilliant people (https://github.com/time-to-move/TTM) discovered that you can steer the motion of a model such as Wan 2.2 i2v without changing its weights: you just need to apply specific Control and Mask videos. The Motion Designer has an i2v TTM mode that will let you generate the videos in the right format. The way it works is that, using a Start Image, you define objects and their corresponding trajectories. For best results, it is recommended to also provide a Background Image, which is the Start Image without the objects you are moving (use Qwen for that). TTM works with Lora Accelerators.
TTM Suggested Settings: Lightning i2v v1.0 2 Phases (8 steps), Video to Video, Denoising Strength 0.9, Masking Strength 0.1. I will upload Sample Settings later in the Settings Channel.
PainterI2V (https://github.com/princepainter/): have you found that the i2v Lora accelerators kill the motion? This is an alternative to 3-phase guidance to restore motion, and it is free as it doesn't require any extra processing or changing the weights. It works best in a scene where the background remains the same. In order to control the acceleration in i2v models, you will find a new Motion Amplitude slider in the Quality tab.
Nexus 1.3B: an incredible Wan 2.1 1.3B finetune made by @Nexus. It is specialized in human motion (dance, fights, gym, ...). It is fast as it is already CausVid accelerated. Try it with the Prompt Enhancer at 720p.
Black Start Frames for Wan 2.1/2.2 i2v: some i2v models can be turned into powerful t2v models by providing a black frame as a Start Frame. From now on, if you don't provide any start frame, WanGP will automatically generate a black start frame at the current output resolution, or at the corresponding End frame resolution (if any).
update 9.51: Fixed Chrono Edit Output, added Temporal Reasoning Video
update 9.52: Black start frames support for Wan i2v models
VAE Upsampler for Wan 2.1/2.2 Text 2 Image and Qwen Image: spacepxl has tweaked the VAE Decoder used by Wan & Qwen so that it can decode and upsample x2 at the same time. The end result is a fast, high quality image upsampler (much better than Lanczos). Check the Postprocessing tab / Spatial Upsampling dropdown box. Unfortunately this only works with Image Generation; no support yet for Video Generation. I have also added a VAE Refiner that keeps the existing resolution but slightly improves the details.
Mocha: a much requested alternative to Wan Animate. Use this model to replace a person in a control video. For best results you will need to provide two reference images for the new person; the second image should be a face close-up. This model seems to be optimized to generate 81 frames, and the first output frame is often messed up. The LightX2V t2v 4-step Lora Accelerator works well. Please note this model is VRAM hungry: for 81 frames to generate, it will process 161 frames internally.
Lucy Edit v1.1: a new version (finetune) has been released. Not sure yet if I like it better than the original one. In theory it should work better for changing the background setting, for instance.
Ovi 1.1: this new version comes in two flavors, 5s & 10s! Thanks to WanGP VRAM optimisations, only 8 GB of VRAM will be needed for a 10s generation. Beware: the prompt syntax has slightly changed, since an audio background is now introduced using "Audio:" instead of tags.
Top Models Selection: if you are new to WanGP or are simply lost among the numerous models offered by WanGP, just check the updated Guides tab. You will find a list of highlighted models and advice about how & when to use them.
update 9.41: Added Mocha & Lucy Edit 1.1
update 9.42: Added Ovi 1.1
update 9.43: Improved Linux support: no more visual artifacts with fp8 finetunes, auto install ffmpeg, detect audio device, ...
update 9.44: Added links to highlighted models in Guide tab
Chrono Edit: a new, original way to edit an image. This one generates a video that performs the full editing work and returns the last image. It can be hit or miss, but when it works it is quite impressive. Please note you must absolutely use the Prompt Enhancer on your prompt instruction, because this model expects a very specific format; the Prompt Enhancer for this model has a specific System Prompt to generate the right Chrono Edit prompt.
LyCORIS support: preliminary, basic support for the LyCORIS Lora format. At least Qwen Multi Camera should work (https://huggingface.co/dx8152/Qwen-Edit-2509-Multiple-angles). If you have a LyCORIS that does not work and may be interesting, please mention it in the Request Channel.
i2v Enhanced Lightning v2 (update 9.37): added this impressive finetune to the default selection of models. Not only is it accelerated (4 steps), but it is also very good at following camera and timing instructions.
This finetune loves long prompts. Therefore, to improve prompt readability, WanGP now supports multi-line prompts (as an option).
update 9.35: Added a Sample PlugIn App that shows how to collect and modify settings from a PlugIn
update 9.37: Added i2v Enhanced Lightning
WanGP exclusive: VRAM requirements have never been that low !
Wan 2.2 Ovi 10 GB for all the GPU Poors of the World: only 6 GB of VRAM to generate 121 frames at 720p. With 16 GB of VRAM, you may even be able to load the whole model in VRAM with Memory Profile 3.
To get the x10 speed effect, just apply the FastWan Lora Accelerator that comes prepackaged with Ovi (accessible in the Settings dropdown box at the top).
After thorough testing, it appears that PyTorch 2.8 causes RAM memory leaks when switching models, as it won't release all the RAM. I could not find any workaround, so the default PyTorch version for WanGP is back to PyTorch 2.7. Unless you absolutely want to use PyTorch compilation (which is not stable with PyTorch 2.7 on RTX 50xx), it is recommended to switch back to PyTorch 2.7.1 (a tradeoff between 2.8 and 2.7):
cd Wan2GP
conda activate wan2gp
pip install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128
You will need to reinstall SageAttention, FlashAttention, ... (see below).
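For example, something along these lines (indicative only; the exact packages, versions and build steps for your GPU and Python version are in the installation guide):
pip install sageattention==1.0.6
pip install flash-attn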
update v9.21: Got FastWan to work with Ovi: it is now 10 times faster ! (not including the VAE)
update v9.25: added Chroma Radiance october edition + reverted to pytorch 2.7
With WanGP v9 you will have enough features to go to a desert island with no internet connection and come back with a full Hollywood movie.
First here are the new models supported:
Upgraded Features:
Huge Kudos & Thanks to Tophness, who has outdone himself with these Great Features:
WanGP v9 now targets PyTorch 2.8, although it should still work with 2.7. Don't forget to upgrade by doing:
pip install torch==2.8.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128
You will need to upgrade SageAttention or FlashAttention (check the installation guide).
Update info: you might get a git error message while upgrading to v9 if WanGP is already installed. Sorry about that; if that's the case, you don't need to reinstall WanGP from scratch. Here is how to fix the issue while still preserving your data:
cd installation_path_of_wangp
git fetch origin && git reset --hard origin/main
pip install -r requirements.txt
See full changelog: Changelog
One-click installation:
Get started instantly with Pinokio App
In Pinokio, it is recommended to use the Community Scripts wan2gp or wan2gp-amd by Morpheus rather than the official Pinokio install.
Use Redtash1 One Click Install with Sage
Manual installation:
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
pip install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128
pip install -r requirements.txt
Run the application:
python wgp.py
First time using WanGP? Just check the Guides tab; you will find a selection of recommended models to use.
Update the application: if using Pinokio, use Pinokio to update. Otherwise, go to the directory where WanGP is installed and run:
git pull
conda activate wan2gp
pip install -r requirements.txt
If you get some error messages related to git, you may try the following (beware: this will overwrite local changes made to the WanGP source code):
git fetch origin && git reset --hard origin/main
conda activate wan2gp
pip install -r requirements.txt
Run headless (batch processing):
Process saved queues without launching the web UI:
# Process a saved queue
python wgp.py --process my_queue.zip
Create your queue in the web UI, save it with "Save Queue", then process it headless. See CLI Documentation for details.
For Debian-based systems (Ubuntu, Debian, etc.):
./run-docker-cuda-deb.sh
This automated script will:
Docker environment includes:
Supported GPUs: RTX 40XX, RTX 30XX, RTX 20XX, GTX 16XX, GTX 10XX, Tesla V100, A100, H100, and more.
For detailed installation instructions for different GPU generations:
Made with ❤️ by DeepBeepMeep