Alibaba Wan2.1 Open-Sourced! 10K+ Stars! Ultra-Realistic & Smooth AI Model
Alibaba's Wan2.1 AI video model: Open-source, 10K+ GitHub stars. Generate ultra-smooth HD videos from just first & last frames!
"AI Disruption" Publication 5900 Subscriptions 20% Discount Offer Link.
The Wan2.1 video model, recently open-sourced by Alibaba’s Tongyi Wanxiang, has quickly gained attention in the open-source community due to its outstanding technical performance and broad application potential.
Alongside the Wan2.1 text-to-video large model, Tongyi Wanxiang also released an image-to-video model and a compact 1.3B-parameter model.
This series of updates not only enriches the toolkit for content creation but also provides more flexible options for various application scenarios. As of now, Wan2.1 has surpassed 10,000 GitHub stars and accumulated over 2.2 million downloads across the web.
On April 17, Wanxiang released another exciting update: the first-and-last-frame video generation model is now officially open-sourced!
Users only need to provide the first and last frames, and the model can automatically generate smooth and seamless transition effects, allowing the visuals to naturally evolve between the starting and ending points.
Built on the Wan2.1 text-to-video 14B large model, Wanxiang’s first-and-last-frame model supports the generation of 5-second 720p high-definition videos, offering creators a more efficient and flexible video production method.
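To make the two-keyframe workflow concrete, here is a minimal, self-contained Python sketch that simply cross-fades between a first and last frame to produce a 5-second 720p clip. This is only a naive illustration of the input/output contract described above (two frames in, a short HD clip out); the actual Wan2.1 first-and-last-frame model synthesizes genuine motion between the keyframes with a 14B diffusion model rather than blending pixels. The file names and parameters below are placeholders.

```python
# Naive stand-in for first-and-last-frame video generation: a linear cross-fade
# between the two supplied keyframes. Wan2.1's model replaces this simple
# interpolation with learned, motion-aware video synthesis, but the contract is
# the same: a first frame and a last frame in, a short 720p clip out.
import numpy as np
import imageio.v2 as imageio
from PIL import Image

def crossfade_video(first_path: str, last_path: str,
                    out_path: str = "transition.mp4",
                    seconds: float = 5.0, fps: int = 16,
                    size: tuple[int, int] = (1280, 720)) -> None:
    # Load both keyframes and resize them to the target 720p resolution.
    first = np.asarray(Image.open(first_path).convert("RGB").resize(size), dtype=np.float32)
    last = np.asarray(Image.open(last_path).convert("RGB").resize(size), dtype=np.float32)

    num_frames = int(seconds * fps)
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1)            # blend weight goes from 0 to 1
        blend = (1.0 - t) * first + t * last
        frames.append(blend.astype(np.uint8))

    # Write the clip; mp4 export requires the imageio-ffmpeg plugin.
    imageio.mimsave(out_path, frames, fps=fps)

if __name__ == "__main__":
    # Placeholder file names; supply your own first and last frames.
    crossfade_video("first_frame.png", "last_frame.png")
```

With the real model, the same two inputs plus an optional text prompt yield a coherent transition in which objects and camera motion evolve naturally, rather than a simple dissolve.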