GPU NODES

NVIDIA GB200

NVIDIA Grace™ is designed for a new type of data center—one that processes mountains of data to produce intelligence with maximum energy efficiency. These data centers run diverse workloads like AI, data analytics, hyperscale cloud applications, and high-performance computing (HPC).

Key Facts

30X FASTER
Llama 3 Inference
GB200 NVL72 delivers 30X faster Llama 3 inference than the NVIDIA H100 Tensor Core GPU.
9X FASTER
Vector Database Search
GB200 NVL2 is 9X faster at vector database search than the H100.
18X FASTER
Data Processing
GB200 NVL72 is 18X faster at processing data than the Intel Xeon 8480+.
25X MORE EFFICIENT
Energy Efficiency
GB200 NVL2 is 25X more energy efficient than the H100.

Our Nodes

Take advantage of NVIDIA’s GB200 NVL72, a liquid-cooled, rack-scale solution that connects 36 Grace CPUs and 72 Blackwell GPUs. Its 72-GPU NVLink domain acts as a single massive GPU and delivers 30X faster real-time trillion-parameter LLM inference.
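From the software side, that 72-GPU NVLink domain looks like one large pool of devices behind a very fast interconnect. The following is a minimal sketch, not GB200-specific code, assuming a PyTorch installation with CUDA and NCCL support: it runs an all-reduce across however many ranks it is launched with, and within a single NVLink domain that collective traffic rides the NVLink fabric.

```python
# Minimal sketch: an NCCL all-reduce across all launched ranks.
# Assumes PyTorch with CUDA/NCCL; the script name and tensor size
# are illustrative. Launch with, e.g.:
#   torchrun --nproc_per_node=8 allreduce_demo.py
import os

import torch
import torch.distributed as dist


def main() -> None:
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR/PORT.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes its rank id; NCCL sums across all GPUs.
    # Within one NVLink domain this traffic stays on NVLink.
    x = torch.full((1024,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        expected = sum(range(dist.get_world_size()))
        print(f"all-reduce result {x[0].item():.0f}, expected {expected}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The same script scales from a single node to a larger domain simply by launching more ranks; nothing in the code needs to know how many GPUs share the NVLink fabric.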

About

NVIDIA Grace™ is designed for a new type of data center: the AI factory. The NVIDIA GB200 Grace Blackwell Superchip combines two NVIDIA Blackwell Tensor Core GPUs with a Grace CPU and can scale up to the GB200 NVL72, a massive 72-GPU system connected by NVIDIA® NVLink®, to deliver 30X faster real-time inference for large language models.
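As a rough way to see that topology from software, here is a minimal sketch using NVML through the pynvml bindings to list the visible GPUs and count the NVLink links each one reports as active. The package name (nvidia-ml-py) and the link-probing pattern are assumptions about a typical NVIDIA environment, not Nscale-specific tooling.

```python
# Minimal sketch: enumerate GPUs and count active NVLink links via
# NVML (pynvml bindings, installable as nvidia-ml-py). Assumes an
# NVIDIA driver is present; the probing loop is a generic pattern.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    print(f"visible GPUs: {count}")
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        # Probe each possible link; inactive or unsupported links
        # raise NVMLError, which we treat as "not active".
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                pass
        print(f"GPU {i}: {name}, active NVLink links: {active}")
finally:
    pynvml.nvmlShutdown()
```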

Get access to a fully integrated suite of AI services and compute

Reduce costs, grow revenue, and run your AI workloads more efficiently on a fully integrated platform. Whether you're using Nscale's built-in AI/ML tools or your own, our platform is designed to simplify the journey from development to production.

Serverless
Marketplace
Training
Inference
GPU nodes
Nscale's Datacentres
Powered by 100% renewable energy
LLM Library
Pre-configured Software
Pre-configured Infrastructure
Job Management
Job Scheduling
Container Orchestration
Optimised Libraries
Optimised Compilers and Tools
Optimised Runtime

Access thousands of GPUs tailored to your requirements.