GPU NODES

NVIDIA H200

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities.

Key Facts

- 1.9X faster inference: up to 1.9X faster inference on Llama2 70B than the H100.
- 141 GB HBM3e memory: the first GPU to offer 141 gigabytes (GB) of HBM3e memory.
- 4.8 TB/s memory bandwidth: 1.4X more memory bandwidth than the H100.
- 110X faster HPC performance: higher memory bandwidth delivers up to 110X faster time to results than CPU-only systems.
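As a rough back-of-the-envelope check on why memory bandwidth drives inference speed, here is a minimal sketch. The 140 GB weight figure assumes Llama2 70B stored in FP16 (2 bytes per parameter); real-world throughput is lower because of KV-cache reads, attention, and scheduling overheads, so treat this purely as a ceiling:

```python
# Rough sketch: memory bandwidth puts an upper bound on LLM decode speed.
# Assumes every model weight is streamed from HBM once per generated token
# (memory-bound decode); figures are illustrative, not measured throughput.

def max_tokens_per_sec(weight_gb: float, bandwidth_tb_s: float) -> float:
    """Upper bound on tokens/s when weight streaming is the bottleneck."""
    return (bandwidth_tb_s * 1e12) / (weight_gb * 1e9)

# Llama2 70B in FP16 is ~140 GB of weights; the H200 offers 4.8 TB/s.
print(round(max_tokens_per_sec(140, 4.8), 1))  # ~34 tokens/s ceiling
```

The same arithmetic explains the uplift over the H100: with identical weights, a 1.4X increase in bandwidth raises this ceiling by 1.4X.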

Our Nodes

NVIDIA Specifications: NVIDIA H200 NVL vs. NVIDIA H100 NVL (comparison table)

About NVIDIA H200

Based on the NVIDIA Hopper™ architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth. The H200's larger, faster memory accelerates generative AI and large language models (LLMs), and advances scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
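To make the 141 GB figure concrete, a quick estimate of whether a model's weights fit on a single GPU. This assumes 2 bytes per parameter (FP16/BF16); the KV cache and activations need additional memory on top, so this is a lower bound on real usage:

```python
# Sketch: estimate whether a model's weights fit in H200 memory.
# Assumes 2 bytes per parameter (FP16/BF16); KV cache and activations
# are extra, so actual memory use will be higher than this figure.

def weights_gb(params_billions: float, bytes_per_param: float = 2) -> float:
    """Memory for model weights alone, in GB (1e9 params * bytes / 1e9)."""
    return params_billions * bytes_per_param

H200_GB = 141
print(weights_gb(70), weights_gb(70) <= H200_GB)  # 140.0 True
```

A 70B-parameter model at FP16 lands at 140 GB, just under the H200's 141 GB; on the H100's smaller memory the same model must be sharded across GPUs or quantized.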

Get access to a fully integrated suite of AI services and compute

Reduce costs, grow revenue, and run your AI workloads more efficiently on a fully integrated platform. Whether you're using Nscale's built-in AI/ML tools or your own, our platform is designed to simplify the journey from development to production.

- Serverless
- Marketplace
- Training
- Inference
- GPU nodes
- Nscale's Datacentres: powered by 100% renewable energy
- LLM Library
- Pre-configured Software
- Pre-configured Infrastructure
- Job Management
- Job Scheduling
- Container Orchestration
- Optimised Libraries
- Optimised Compilers and Tools
- Optimised Runtime

Access thousands of GPUs tailored to your requirements.