Peter Zhang
Feb 24, 2025 15:55
NVIDIA AI Enterprise now supports the H200 NVL GPU, enhancing AI infrastructure with improved performance and efficiency. The update includes new software components for accelerated AI workloads.
NVIDIA has announced a significant update to its NVIDIA AI Enterprise platform, which now incorporates support for the NVIDIA H200 NVL GPU. The addition is part of the latest release of the company's infrastructure software, aimed at enhancing enterprise-level AI applications. The H200 NVL, a new addition to NVIDIA's data center GPU lineup, promises to deliver cutting-edge capabilities for agentic and generative AI, according to NVIDIA.
NVIDIA AI Enterprise Platform
The NVIDIA AI Enterprise platform is designed to facilitate the development and deployment of production-grade AI solutions. It consists of a comprehensive suite of software components that can be deployed on various hardware setups, including servers, edge systems, and workstations. The platform is divided into two main categories: the AI and Data Science software catalog and the infrastructure software collection.
The AI and Data Science software catalog features NVIDIA NIM microservices and several frameworks for building AI workflows. These components are containerized for seamless cloud-native deployment, ensuring compatibility with various cloud service providers.
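Because the catalog components are containerized, a NIM microservice is typically pulled and run like any other OCI container. The sketch below is illustrative only: the image path and the NGC_API_KEY variable are placeholders, and the exact image names and required settings for a given model are listed in the NGC Catalog.

```shell
# Illustrative sketch: running a NIM microservice container from nvcr.io.
# The image path below is a placeholder, not a specific supported model.
docker login nvcr.io --username '$oauthtoken' --password "$NGC_API_KEY"

docker run --rm --gpus all \
  -e NGC_API_KEY \
  -p 8000:8000 \
  nvcr.io/nim/<org>/<model>:<tag>
```

The same container image runs unchanged across the cloud providers the platform targets, which is the point of the cloud-native packaging described above.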
Infrastructure Software Collection
The infrastructure software collection provides essential components for supporting AI and data science workloads on accelerated systems. This includes drivers for GPUs, networking, and virtualization, as well as Kubernetes operators. Additionally, Base Command Manager Essentials is available for efficient cluster management.
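On Kubernetes, the driver and device plumbing in this collection is commonly installed through the NVIDIA GPU Operator via Helm. A minimal sketch of that flow, assuming an existing cluster with GPU nodes (namespace and options are illustrative; enterprise deployments pin chart and driver versions explicitly):

```shell
# Minimal sketch: installing the NVIDIA GPU Operator on an existing
# Kubernetes cluster via Helm.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace \
  --wait

# Verify that GPU nodes now advertise the nvidia.com/gpu resource.
kubectl get nodes -o jsonpath='{.items[*].status.allocatable.nvidia\.com/gpu}'
```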
With the latest update, the infrastructure software collection now supports the H200 NVL GPU, a move expected to significantly improve AI application performance and power efficiency.
H200 NVL GPU Enhancements
Unveiled at the Supercomputing 2024 conference, the H200 NVL GPU is designed for data centers requiring lower-power, air-cooled enterprise rack designs. It offers flexible configurations to accelerate a wide range of AI workloads. The GPU boasts a 1.5x memory increase and a 1.2x bandwidth increase over its predecessor, the NVIDIA H100 NVL, delivering up to 1.7x faster inference performance.
Support for the H200 NVL in NVIDIA AI Enterprise is being rolled out in phases. Version 6.0 of the infrastructure collection, available now, supports bare-metal applications and virtualization with GPU pass-through. Version 6.1, expected later, will add support for virtualization with vGPU.
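Inside a guest VM that has received the GPU, the deployment mode can be sanity-checked with nvidia-smi, assuming the driver is installed in the guest; this is a generic check, not an NVIDIA-prescribed procedure:

```shell
# Sanity check inside a guest VM: confirm the GPU is visible to the guest.
nvidia-smi --query-gpu=name,memory.total --format=csv

# The full query output includes a "GPU Virtualization" section whose
# "Virtualization Mode" field reads "Pass-Through" under the 6.0
# deployment model described above (vGPU arrives with 6.1).
nvidia-smi -q | grep -A2 -i "virtualization mode"
```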
Reference Structure and Availability
NVIDIA has also released a reference architecture to streamline the deployment and configuration of AI systems. The architecture provides a flexible infrastructure stack for original equipment manufacturers (OEMs) and partners, ensuring consistent software components and adaptable hardware configurations.
For enterprises purchasing servers with the H200 NVL, NVIDIA AI Enterprise is immediately available, complete with a five-year subscription. Additionally, NVIDIA offers several ways to get started, including free access to NIM microservices for testing and a 90-day free evaluation license. The NVIDIA AI Enterprise Infrastructure Collection 6.0 is available for download from the NVIDIA NGC Catalog.
Image source: Shutterstock


