Luisa Crawford
Apr 03, 2026 21:53
Alibaba’s Wan 2.7 AI video model hits Together AI with text-to-video now live; image-to-video and editing tools are coming soon at competitive pricing.
Together AI has rolled out Alibaba’s Wan 2.7 video generation model on its cloud platform, pricing the text-to-video capability at $0.10 per second of generated footage. The deployment marks the first major cloud availability for the four-model suite that Alibaba released in late March.
The text-to-video model, available via the endpoint Wan-AI/wan2.7-t2v, supports 720p and 1080p resolution with outputs ranging from 2 to 15 seconds. Audio input can drive generation, and multi-shot narrative control works directly through prompt language, a notable upgrade over basic prompt-to-video systems that force creators into fragmented workflows.
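Those constraints translate naturally into a small request builder. The sketch below is illustrative only: the field names (`prompt`, `resolution`, `duration_seconds`, `audio_url`) are assumptions, not Together AI's documented schema; only the model identifier and the 720p/1080p and 2–15 second limits come from the article.

```python
# Hypothetical request-payload builder for the Wan-AI/wan2.7-t2v endpoint.
# Field names are illustrative assumptions, not Together AI's documented schema.

def build_t2v_payload(prompt, resolution="720p", duration_seconds=5, audio_url=None):
    """Validate inputs against the stated limits and return a JSON-ready dict."""
    if resolution not in ("720p", "1080p"):
        raise ValueError("Wan 2.7 text-to-video supports 720p or 1080p output")
    if not 2 <= duration_seconds <= 15:
        raise ValueError("clip length must be between 2 and 15 seconds")
    payload = {
        "model": "Wan-AI/wan2.7-t2v",
        "prompt": prompt,
        "resolution": resolution,
        "duration_seconds": duration_seconds,
    }
    if audio_url is not None:
        payload["audio_url"] = audio_url  # optional audio input to drive generation
    return payload

# Multi-shot narrative control expressed directly in the prompt text:
payload = build_t2v_payload(
    "Shot 1: a drone rises over a harbor at dawn. Shot 2: close-up of a fishing boat.",
    resolution="1080p",
    duration_seconds=10,
)
```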
What’s Actually Shipping
Right now, only text-to-video is live. Together AI says image-to-video and reference-to-video capabilities are “coming soon,” with video editing tools to follow.
The image-to-video model will support first-frame, first-and-last-frame, and continuation generation, useful for storyboarding workflows. A 3×3 grid-to-video feature targets teams building structured content from static assets.
Reference-to-video gets more interesting for production work. It will accept both reference images and reference videos as inputs, handling multi-character interactions and complex scene composition at up to 1080p for 10-second clips.
The Editing Play
Video Edit, the fourth model in the suite, addresses what is arguably the biggest pain point in AI video: the inability to revise without starting from scratch. Together AI’s implementation will support instruction-based editing via text, reference image-based modifications, style transfer, and temporal feature cloning (motion, camera work, and effects lifted from source media).
For creative teams, keeping these capabilities within one API surface eliminates the handoff chaos that currently plagues AI video production. Most workflows today involve generating in one tool, editing in another, and manually patching the results.
Competitive Positioning
The $0.10 per second pricing puts Together AI within striking distance of competitors, though direct comparisons depend heavily on resolution and duration parameters. Wan 2.7 itself has drawn attention since its March launch: reviews have called it potentially the strongest AI video model of 2026, though some skepticism about the hype remains.
Alibaba built Wan 2.7 within its Qwen ecosystem, and earlier versions (2.1 and 2.2) were open-sourced. Whether 2.7 follows that path hasn’t been confirmed, but the model is now available through several cloud providers, including Atlas Cloud and WaveSpeedAI alongside Together AI.
Integration Details
For developers already on Together AI’s platform, adding video generation requires no new authentication or billing setup. The same SDKs work across text, image, and video inference. The company offers serverless endpoints for development, with volume pricing available for production workloads.
Teams evaluating the technology can test directly in Together AI’s playground before committing to API integration. Full documentation covers parameters including audio inputs, resolution control, and the polling loop required for asynchronous video generation jobs.
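The asynchronous flow the documentation describes (submit a job, then poll until it finishes) can be sketched generically. The `fetch_status` callable and the status strings below are placeholders standing in for a real status-check API call, not Together AI's actual response schema:

```python
import time

# Generic polling loop for an asynchronous video-generation job.
# `fetch_status` stands in for an API call (e.g. a GET on a job endpoint);
# the status strings are illustrative, not Together AI's documented values.

def poll_until_done(fetch_status, interval=2.0, timeout=600.0, sleep=time.sleep):
    """Call fetch_status() until the job completes or fails, or timeout expires."""
    waited = 0.0
    while waited <= timeout:
        job = fetch_status()
        if job["status"] == "completed":
            return job  # e.g. carries a URL to the rendered video
        if job["status"] == "failed":
            raise RuntimeError(job.get("error", "video generation failed"))
        sleep(interval)
        waited += interval
    raise TimeoutError("video generation job did not finish in time")

# Usage with a fake backend that completes on the third poll:
responses = iter([
    {"status": "queued"},
    {"status": "running"},
    {"status": "completed", "video_url": "https://example.com/clip.mp4"},
])
job = poll_until_done(lambda: next(responses), sleep=lambda s: None)
```

Injecting `sleep` as a parameter keeps the loop testable without real waiting; production code would simply use the default `time.sleep`.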
Image source: Shutterstock


