AI & HPC capabilities
Scale42 facilities are designed from the ground up for GPU/TPU/ASIC-centric workloads. High rack densities, direct liquid cooling, and the headroom to scale a single training cluster across an entire campus.
Density
Module designs accommodate the full envelope of current AI rack architectures, from air-cooled H100/H200 deployments to liquid-cooled GB200/B200 clusters and beyond.
Cold-plate DLC and rear-door heat exchangers as standard. CDUs sized to support N+1 redundancy across the row (a rough sizing sketch follows below).
Hot-aisle containment with high-velocity supply air for traditional 30–60 kW racks. Hybrid rows for mixed fleets.
Floor loading and plumbing accommodate single-phase and two-phase immersion deployments.
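As a rough illustration of the N+1 CDU sizing mentioned above, the sketch below works the arithmetic for one liquid-cooled row. The rack density, row length and per-CDU capacity figures are assumptions chosen only to make the example concrete; they are not Scale42 specifications.

```python
import math

# Illustrative assumptions only - not Scale42 specifications.
RACK_KW = 120        # assumed liquid-cooled rack density (kW per rack)
RACKS_PER_ROW = 16   # assumed racks per row
CDU_KW = 800         # assumed usable cooling capacity per CDU (kW)

row_heat_kw = RACK_KW * RACKS_PER_ROW          # total heat rejected by one row
cdus_needed = math.ceil(row_heat_kw / CDU_KW)  # CDUs required to carry the load (N)
cdus_installed = cdus_needed + 1               # N+1: one redundant unit per row

print(f"{row_heat_kw} kW per row -> {cdus_needed} CDUs + 1 spare = {cdus_installed} installed")
```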
Topology
12.5 MW modules combine into 50, 100 and 500 MW+ campuses with low-latency interconnect. Designed for scale-out training fabrics: Ethernet (RoCE) and InfiniBand topologies up to multi-thousand-GPU clusters.
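To make the module-to-campus arithmetic concrete, here is a minimal sketch of how 12.5 MW building blocks aggregate to the campus targets above. The per-rack power, GPUs-per-rack and PUE figures are illustrative assumptions, not Scale42 specifications or guarantees.

```python
import math

MODULE_MW = 12.5       # module size stated above

# Illustrative assumptions only - not Scale42 specifications.
RACK_KW = 130          # assumed IT load of a liquid-cooled, NVL-class rack
GPUS_PER_RACK = 72     # assumed GPUs per rack
PUE = 1.2              # assumed campus PUE (cooling and overhead share)

def campus_sketch(target_mw: float) -> dict:
    """Rough module count and GPU capacity for a campus power target."""
    modules = math.ceil(target_mw / MODULE_MW)
    it_mw = (modules * MODULE_MW) / PUE          # IT load left after overhead
    racks = int(it_mw * 1000 // RACK_KW)         # racks that fit the IT budget
    return {"modules": modules, "racks": racks, "gpus": racks * GPUS_PER_RACK}

for target in (50, 100, 500):
    print(target, "MW ->", campus_sketch(target))
```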
Timelines
Permitted grid capacity is the bottleneck for AI infrastructure. Every Scale42 site is selected for an available, dated grid connection, not paper allocations. Modular design lets you energise the first 12.5 MW while the next module is still being built.
Power, climate, fibre, permitting, partnership.
Phase 1 energisation — 12–18 months from go.
Campus build-out in parallel with Module 1 commissioning.
100 MW, 200 MW, 500 MW+, phased to match your compute roadmap.
Next steps
From 5,000-GPU training jobs to multi-region inference fleets: we'll match you to a site, a module count and a timeline.
Request RFI pack →