💜 Wan | 🖥️ GitHub | 🤗 Hugging Face | 🤖 ModelScope | 📑 Paper (Coming soon) | 📑 Blog | 💬 WeChat Group | 📖 Discord
Wan: Open and Advanced Large-Scale Video Generative Models
In this repository, we present Wan2.1, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. Wan2.1 offers these key features:
- 👍 SOTA Performance: Wan2.1 consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
- 👍 Supports Consumer-grade GPUs: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
- 👍 Multiple Tasks: Wan2.1 excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
- 👍 Visual Text Generation: Wan2.1 is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- 👍 Powerful Video VAE: Wan-VAE delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
[Demo video: video_demo_v2.mp4]
- Mar 3, 2025: 👋 Wan2.1's T2V and I2V have been integrated into Diffusers (T2V | I2V). Feel free to give it a try; a minimal Diffusers sketch follows this list!
- Feb 27, 2025: 👋 Wan2.1 has been integrated into ComfyUI. Enjoy!
- Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1.
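For a quick try in Diffusers, the sketch below shows minimal text-to-video usage. It assumes a recent `diffusers` release that ships the `WanPipeline` and `AutoencoderKLWan` classes and the Diffusers-format checkpoint `Wan-AI/Wan2.1-T2V-1.3B-Diffusers`; check the Diffusers documentation for the exact API of your installed version.

```python
# Minimal Diffusers text-to-video sketch (assumes Wan support in your diffusers version).
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"  # Diffusers-format checkpoint
# Keep the VAE in float32 for stability; run the rest of the pipeline in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16).to("cuda")

frames = pipe(
    prompt="Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.",
    height=480,
    width=832,
    num_frames=81,        # roughly 5 seconds at 16 fps
    guidance_scale=6.0,   # in line with the --sample_guide_scale 6 recommended for the 1.3B model below
).frames[0]
export_to_video(frames, "t2v_1.3b_480p.mp4", fps=16)
```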
If your work has improved Wan2.1 and you would like more people to see it, please inform us.
- TeaCache now supports Wan2.1 acceleration, capable of increasing speed by approximately 2x. Feel free to give it a try!
- DiffSynth-Studio provides more support for Wan2.1, including video-to-video, FP8 quantization, VRAM optimization, LoRA training, and more. Please refer to their examples.
- Wan2.1 Text-to-Video
    - Multi-GPU Inference code of the 14B and 1.3B models
    - Checkpoints of the 14B and 1.3B models
    - Gradio demo
    - ComfyUI integration
    - Diffusers integration
    - Diffusers + Multi-GPU Inference
- Wan2.1 Image-to-Video
    - Multi-GPU Inference code of the 14B model
    - Checkpoints of the 14B model
    - Gradio demo
    - ComfyUI integration
    - Diffusers integration
    - Diffusers + Multi-GPU Inference
Clone the repo:
```sh
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
```
Install dependencies:
```sh
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
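If you want to confirm the PyTorch requirement before going further, a quick check such as the following works (a small sketch, not part of the repository):

```python
# Verify that the installed torch satisfies the >= 2.4.0 requirement noted above.
import torch

major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 4), f"torch >= 2.4.0 is required, found {torch.__version__}"
```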
| Models | Download Link | Notes |
|---|---|---|
| T2V-14B | 🤗 Huggingface 🤖 ModelScope | Supports both 480P and 720P |
| I2V-14B-720P | 🤗 Huggingface 🤖 ModelScope | Supports 720P |
| I2V-14B-480P | 🤗 Huggingface 🤖 ModelScope | Supports 480P |
| T2V-1.3B | 🤗 Huggingface 🤖 ModelScope | Supports 480P |
💡Note: The 1.3B model is capable of generating videos at 720P resolution. However, due to limited training at this resolution, the results are generally less stable compared to 480P. For optimal performance, we recommend using 480P resolution.
Download models using huggingface-cli:
```sh
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-14B --local-dir ./Wan2.1-T2V-14B
```
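The same download can also be done from Python with `huggingface_hub` (a minimal sketch equivalent to the CLI command above):

```python
# Programmatic equivalent of `huggingface-cli download`.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-14B", local_dir="./Wan2.1-T2V-14B")
```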
Download models using modelscope-cli:
```sh
pip install modelscope
modelscope download Wan-AI/Wan2.1-T2V-14B --local_dir ./Wan2.1-T2V-14B
```
This repository supports two Text-to-Video models (1.3B and 14B) and two resolutions (480P and 720P). The parameters and configurations for these models are as follows:
| Task | 480P | 720P | Model |
|---|---|---|---|
| t2v-14B | ✔️ | ✔️ | Wan2.1-T2V-14B |
| t2v-1.3B | ✔️ | ❌ | Wan2.1-T2V-1.3B |
To keep things simple, we start with a basic version of the inference process that skips the prompt extension step.
- Single-GPU inference
```sh
python generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
If you encounter OOM (Out-of-Memory) issues, you can use the `--offload_model True` and `--t5_cpu` options to reduce GPU memory usage. For example, on an RTX 4090 GPU:

```sh
python generate.py --task t2v-1.3B --size 832*480 --ckpt_dir ./Wan2.1-T2V-1.3B --offload_model True --t5_cpu --sample_shift 8 --sample_guide_scale 6 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
💡Note: If you are using the `T2V-1.3B` model, we recommend setting the parameter `--sample_guide_scale 6`. The `--sample_shift` parameter can be adjusted within the range of 8 to 12 depending on the generated quality.
- Multi-GPU inference using FSDP + xDiT USP
We use FSDP and xDiT USP to accelerate inference.
- Ulysses Strategy

  If you want to use the `Ulysses` strategy, you should set `--ulysses_size $GPU_NUMS`. Note that `num_heads` must be divisible by `ulysses_size` when using the `Ulysses` strategy. For the 1.3B model, `num_heads` is `12`, which is not divisible by 8 (most multi-GPU machines have 8 GPUs), so the `Ring` strategy is recommended instead.

- Ring Strategy

  If you want to use the `Ring` strategy, you should set `--ring_size $GPU_NUMS`. Note that the `sequence length` must be divisible by `ring_size` when using the `Ring` strategy.

Of course, you can also combine the `Ulysses` and `Ring` strategies.
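The constraints above reduce to a few divisibility checks. The helper below is a hypothetical illustration, not part of the repository, and assumes that the two parallel degrees multiply to the number of GPUs when combined:

```python
# Hypothetical helper illustrating the sizing rules described above (assumption:
# ulysses_size * ring_size equals the number of GPUs when the strategies are combined).
def check_usp_config(num_heads: int, seq_len: int, ulysses_size: int, ring_size: int, world_size: int) -> None:
    assert ulysses_size * ring_size == world_size, "ulysses_size * ring_size must match the GPU count"
    assert num_heads % ulysses_size == 0, "num_heads must be divisible by ulysses_size"
    assert seq_len % ring_size == 0, "sequence length must be divisible by ring_size"

# 1.3B model: num_heads = 12, so --ulysses_size 8 fails on an 8-GPU node,
# while --ulysses_size 4 --ring_size 2 (or pure Ring) passes. seq_len is an arbitrary even example.
check_usp_config(num_heads=12, seq_len=32760, ulysses_size=4, ring_size=2, world_size=8)
```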
```sh
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."
```
Extending the prompts can effectively enrich the details in the generated videos, further enhancing the video quality. Therefore, we recommend enabling prompt extension. We provide the following two methods for prompt extension:
- Use the Dashscope API for extension.
  - Apply for a `dashscope.api_key` in advance (EN | CN).
  - Configure the environment variable `DASH_API_KEY` to specify the Dashscope API key. For users of Alibaba Cloud's international site, you also need to set the environment variable `DASH_API_URL` to 'https://dashscope-intl.aliyuncs.com/api/v1'. For more detailed instructions, please refer to the dashscope document.
  - Use the `qwen-plus` model for text-to-video tasks and `qwen-vl-max` for image-to-video tasks.
  - You can modify the model used for extension with the parameter `--prompt_extend_model`. For example:
```sh
DASH_API_KEY=your_key python generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'dashscope' --prompt_extend_target_lang 'zh'
```
- Use a local model for extension.
  - By default, the Qwen model on HuggingFace is used for this extension. Users can choose Qwen models or other models based on the available GPU memory.
  - For text-to-video tasks, you can use models like `Qwen/Qwen2.5-14B-Instruct`, `Qwen/Qwen2.5-7B-Instruct` and `Qwen/Qwen2.5-3B-Instruct`.
  - For image-to-video tasks, you can use models like `Qwen/Qwen2.5-VL-7B-Instruct` and `Qwen/Qwen2.5-VL-3B-Instruct`.
  - Larger models generally provide better extension results but require more GPU memory.
  - You can modify the model used for extension with the parameter `--prompt_extend_model`, allowing you to specify either a local model path or a Hugging Face model. For example:
```sh
python generate.py --task t2v-14B --size 1280*720 --ckpt_dir ./Wan2.1-T2V-14B --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage" --use_prompt_extend --prompt_extend_method 'local_qwen' --prompt_extend_target_lang 'zh'
```
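To illustrate what local prompt extension does conceptually, here is a hedged sketch built on `transformers` and one of the Qwen instruct models listed above. The system prompt and the wrapper are illustrative only; the actual template used by `--prompt_extend_method 'local_qwen'` lives in this repository and may differ.

```python
# Conceptual sketch of local prompt extension; the repository's own prompt template may differ.
from transformers import pipeline

extender = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-3B-Instruct",  # smallest of the T2V extension models listed above
    torch_dtype="auto",
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Rewrite the user's video prompt with rich, concrete visual detail while keeping its meaning."},
    {"role": "user", "content": "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage."},
]
# With chat-style input, the pipeline returns the full conversation; the last message is the rewrite.
extended_prompt = extender(messages, max_new_tokens=256)[0]["generated_text"][-1]["content"]
print(extended_prompt)
```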