Enterprise Cluster Leasing

Dedicated GPU clusters built for production workloads. Pre-configured hardware, high-speed networking, and white-glove support. Deploy in days, not months.


Inference Cluster

Optimized for high-throughput LLM inference with AMD Instinct MI300X accelerators. ROCm-native, zero-config deployment.

$18,500/month

8x MI300X · 128 vCPUs · 1 TB RAM · 100 Gbps network

  • 8x MI300X (192 GB HBM3 each)
  • ROCm 6.x pre-installed
  • vLLM + TGI optimized
  • 100 Gbps InfiniBand
  • 24/7 SRE support
  • Auto-scaling inference endpoints
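The zero-config claim comes down to a one-line launch. A minimal sketch, assuming the pre-installed vLLM stack; the model name and port are placeholders, not defaults of the service:

```shell
# Stand up an OpenAI-compatible endpoint, sharding the model across
# all eight MI300X GPUs with tensor parallelism.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 8 \
  --port 8000
```

TGI deployments follow the same pattern via its `text-generation-launcher` entry point.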

Training Cluster

Purpose-built for large-scale model training. MI325X accelerators with expanded HBM3E memory fit massive parameter counts on fewer devices.

$42,000/month

16x MI325X · 256 vCPUs · 2 TB RAM · 200 Gbps network

  • 16x MI325X (256 GB HBM3E each)
  • Distributed training ready
  • DeepSpeed + FSDP support
  • 200 Gbps RDMA fabric
  • Checkpoint storage included
  • Dedicated network partition
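As a sketch of what "distributed training ready" means in practice, a standard torchrun launch covers both FSDP and DeepSpeed entry points. The node layout (2 nodes x 8 GPUs), rendezvous host, and `train.py` script are illustrative placeholders, not properties of the cluster:

```shell
# Run once per node: 2 nodes x 8 GPUs covers all 16 MI325X accelerators.
# node0 hosts the c10d rendezvous on an arbitrary free port.
torchrun --nnodes=2 --nproc_per_node=8 \
  --rdzv_backend=c10d --rdzv_endpoint=node0:29500 \
  train.py
```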

Heavy Training Cluster

Maximum compute density with next-gen MI355X. For frontier model training and research at scale.

$128,000/month

32x MI355X · 512 vCPUs · 4 TB RAM · 400 Gbps network

  • 32x MI355X (288 GB HBM3E each)
  • CDNA 4 architecture
  • 9.2 TB aggregate HBM
  • 400 Gbps ultra-low-latency fabric
  • Dedicated cooling infrastructure
  • White-glove onboarding

Scale-out Cluster

Homogeneous MI300X footprint for elastic training and inference. Single stack, ROCm end to end, predictable performance.

$52,000/month

16x MI300X · 256 vCPUs · 2 TB RAM · 200 Gbps network

  • 16x MI300X (192 GB HBM3 each)
  • RDMA over InfiniBand
  • Workload-aware scheduling
  • Unified monitoring dashboard
  • PyTorch and JAX tuned on ROCm
  • Custom SLA available

Need a Custom Configuration?

We build clusters to spec. Tell us your workload requirements — GPU type, count, networking, storage — and we will architect a solution.