GPU NODES

AMD Instinct MI300X

AMD Instinct™ MI300 Series accelerators are uniquely suited to power even the most demanding AI and HPC workloads, delivering exceptional compute performance, massive memory density, high-bandwidth memory, and support for specialised data formats.

Key Facts

7.2X FASTER
Throughput and latency
AMD Instinct MI300X GPUs with GEMM tuning improve throughput and latency by up to 7.2x.
304 CUs
GPU Compute Units
Allowing for large parallelisation of AI workloads.
192 GB
HBM3 Memory
2.4x the memory capacity compared to competing accelerators.
5.3 TB/s
Max theoretical memory bandwidth
1.6x the peak theoretical memory bandwidth compared to competing accelerators.

Our Nodes

We take advantage of the AMD Instinct MI300X platform: 8 MI300X GPU OAM modules, fully connected via 4th Gen AMD Infinity Fabric™ links in an industry-standard OCP design, providing up to 1.5 TB of HBM3 capacity for low-latency AI processing.
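The platform-level figures follow directly from the per-GPU specs quoted above; a minimal arithmetic sketch (pure Python, variable names are illustrative only):

```python
# Per-GPU figures from the Key Facts section above.
HBM3_PER_GPU_GB = 192        # HBM3 capacity per MI300X
BANDWIDTH_PER_GPU_TBS = 5.3  # peak theoretical memory bandwidth per MI300X
GPUS_PER_NODE = 8            # OAM modules per platform

# Aggregate across the 8 fully connected GPUs in one node.
node_hbm3_tb = GPUS_PER_NODE * HBM3_PER_GPU_GB / 1024       # 1536 GB = 1.5 TB
node_bandwidth_tbs = GPUS_PER_NODE * BANDWIDTH_PER_GPU_TBS

print(f"Node HBM3 capacity: {node_hbm3_tb:.1f} TB")               # → 1.5 TB
print(f"Aggregate peak bandwidth: {node_bandwidth_tbs:.1f} TB/s")
```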

This ready-to-deploy platform can accelerate time to market and reduce development costs by adding MI300X accelerators to existing AI server and rack infrastructure.

AMD Instinct Series

AMD Instinct MI300 Series accelerators are built on the AMD CDNA™ 3 architecture, which features Matrix Core Technologies and supports a wide range of precisions, from the highly efficient INT8 and FP8 (including sparsity support for AI) to the most demanding FP64 for HPC.
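The practical impact of those precision options on memory footprint can be sketched for a hypothetical model; the parameter count below and the "one 192 GB GPU" comparison are illustrative assumptions, not a benchmark:

```python
# Bytes per element for the standard widths of the formats named above.
BYTES_PER_ELEMENT = {"FP64": 8, "FP32": 4, "FP16": 2, "BF16": 2, "FP8": 1, "INT8": 1}

def model_footprint_gb(num_params: int, fmt: str) -> float:
    """Memory needed to hold num_params weights in the given format, in GiB."""
    return num_params * BYTES_PER_ELEMENT[fmt] / 1024**3

# Hypothetical 70B-parameter model: which formats fit in one 192 GB MI300X?
params = 70_000_000_000
for fmt in ("FP64", "FP16", "FP8"):
    gb = model_footprint_gb(params, fmt)
    print(f"{fmt}: {gb:.0f} GB (fits on one 192 GB GPU: {gb <= 192})")
```

Moving from FP64 to FP8 cuts the weight footprint by 8x, which is why reduced-precision formats matter for serving large models on a single accelerator.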

AMD ROCm™ software includes a broad set of programming models, tools, compilers, libraries, and runtimes for AI models and HPC workloads targeting AMD Instinct accelerators.

Get access to a fully integrated suite of AI services and compute

Reduce costs, grow revenue, and run your AI workloads more efficiently on a fully integrated platform. Whether you're using Nscale's built-in AI/ML tools or your own, our platform is designed to simplify the journey from development to production.

Serverless
Marketplace
Training
Inference
GPU nodes
Nscale's Datacentres
Powered by 100% renewable energy
LLM Library
Pre-configured Software
Pre-configured Infrastructure
Job Management
Job Scheduling
Container Orchestration
Optimised Libraries
Optimised Compilers and Tools
Optimised Runtime

Access thousands of GPUs tailored to your requirements.