Jessie A Ellis
Feb 27, 2026 18:05
NVIDIA offers free GPU-accelerated endpoints for Alibaba's 397B-parameter Qwen3.5 vision-language model, enabling developers to build multimodal AI agents.
NVIDIA has rolled out free GPU-accelerated endpoints for Alibaba's Qwen3.5 vision-language model, giving developers instant access to the 397-billion-parameter system via Blackwell-architecture hardware. The move positions both tech giants to capture the growing market for multimodal AI agents capable of understanding and navigating user interfaces.
The Qwen3.5 model, which Alibaba launched on February 16, 2026, represents a significant architectural shift in large language models. Despite its massive 397B total parameters, only 17 billion activate per forward pass, a 4.28% activation rate achieved through a hybrid mixture-of-experts (MoE) design combined with Gated Delta Networks. This efficiency translates into real cost savings: Alibaba claims the system runs 60% cheaper and handles large workloads eight times more efficiently than its predecessor.
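The sparse-activation arithmetic is easy to verify. A minimal Python check, using only the figures quoted above:

```python
# Sparse-activation math for Qwen3.5, using figures from the article.
TOTAL_PARAMS_B = 397   # total parameters, in billions
ACTIVE_PARAMS_B = 17   # parameters active per forward pass, in billions

activation_rate = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
print(f"Activation rate: {activation_rate:.2%}")  # prints "Activation rate: 4.28%"
```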
Technical Specs Worth Noting
The model supports an input context length of 256K tokens, extensible to 1 million, enough to process roughly two hours of video content natively. It handles 200+ languages and runs 512 experts per layer, with 11 experts (10 routed plus 1 shared) activated per token across 60 layers.
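Those routing numbers mean only a small slice of each layer's experts fires for any given token. A quick sketch of the per-layer fraction, again using only the figures stated above:

```python
# Per-token expert routing for Qwen3.5, using figures from the article.
EXPERTS_PER_LAYER = 512
ROUTED_ACTIVE = 10   # experts selected by the router per token
SHARED_ACTIVE = 1    # always-on shared expert
LAYERS = 60

active = ROUTED_ACTIVE + SHARED_ACTIVE
fraction = active / EXPERTS_PER_LAYER
print(f"{active} of {EXPERTS_PER_LAYER} experts active per layer ({fraction:.2%})")
print(f"Expert activations per token across {LAYERS} layers: {active * LAYERS}")
```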
Developers can access Qwen3.5 through NVIDIA's build.nvidia.com platform with free registration in the NVIDIA Developer Program. The API follows OpenAI-compatible conventions, making integration straightforward for teams already working with similar tool-calling patterns.
Production Deployment Options
For enterprises moving beyond experimentation, NVIDIA NIM packages the model as containerized inference microservices. These can run on-premises, in cloud environments, or across hybrid deployments. The NeMo framework provides fine-tuning capabilities for domain-specific applications; NVIDIA specifically highlights a medical visual QA tutorial demonstrating training on a radiological dataset.
Alibaba has continued expanding the Qwen3.5 family since the initial launch. On February 24, the company pushed out three additional variants: Qwen3.5-122B-A10B, Qwen3.5-35B-A3B, and Qwen3.5-27B, offering smaller-footprint options for different deployment scenarios.
Alibaba, trading at a market cap of around $372 billion as of February 27, has positioned Qwen3.5 against GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro on benchmark performance. The open-weight models remain available on Hugging Face Hub and ModelScope for developers who prefer self-hosting over NVIDIA's managed endpoints.
Image source: Shutterstock


