AI & HPC capabilities

Engineered for high-density compute.

Scale42 facilities are designed from the ground up for GPU/TPU/ASIC-centric workloads. High rack densities, direct liquid cooling, and the headroom to scale a single training cluster across an entire campus.

Density

From 50 kW to 130 kW per rack.

Module designs accommodate the full envelope of current AI rack architectures — from air-cooled H100/H200 deployments to liquid-cooled GB200/B200 clusters and beyond.
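
As a rough feel for what this envelope means at module scale, the sketch below converts a 12.5 MW module into approximate rack counts across the 50–130 kW range. The IT-load fraction is an illustrative assumption, not a Scale42 specification.

```python
# Illustrative only: approximate rack counts a 12.5 MW module could host
# across the 50-130 kW density envelope. The IT-load fraction is an
# assumption for this sketch, not a Scale42 specification.
MODULE_MW = 12.5
IT_FRACTION = 0.85  # assumed share of module power delivered to IT load

it_load_kw = MODULE_MW * 1_000 * IT_FRACTION

for rack_kw in (50, 80, 100, 130):
    racks = int(it_load_kw // rack_kw)
    print(f"{rack_kw:>3} kW racks -> ~{racks} racks per 12.5 MW module")
```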

Direct liquid cooling

Cold-plate DLC and rear-door heat exchangers as standard. CDUs sized to support N+1 redundancy across the row.
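
A minimal sketch of what N+1 CDU sizing across a row means in practice; the row heat load and per-CDU capacity below are hypothetical figures for illustration, not Scale42 specifications.

```python
import math

# Hypothetical figures for illustration only - not Scale42 specifications.
row_heat_load_kw = 1_000   # e.g. eight 125 kW liquid-cooled racks in one row
cdu_capacity_kw = 300      # assumed rated heat-rejection capacity per CDU

n = math.ceil(row_heat_load_kw / cdu_capacity_kw)  # CDUs needed to carry the load
installed = n + 1                                  # N+1: one redundant unit per row

print(f"N = {n} CDUs carry the row load; {installed} installed for N+1 redundancy")
```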

Air + hybrid

Hot-aisle containment with high-velocity supply for traditional 30–60 kW racks. Hybrid rows for mixed fleets.

Immersion-ready

Floor loading and plumbing accommodate single-phase and two-phase immersion deployments.

Topology

A campus is one cluster.

12.5 MW modules combine into 50, 100 and 500 MW+ campuses with low-latency interconnect. Designed for scale-out training fabrics — Ethernet (RoCE) and InfiniBand topologies up to multi-thousand-GPU clusters.

Module
12.5 MW
Campus
50–500+ MW
Fabric
RoCE / IB
Cluster scale
10k+ GPUs
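
To give a sense of what multi-thousand-GPU fabrics imply, here is a back-of-the-envelope non-blocking fat-tree sizing sketch. The switch radices are common port counts assumed for illustration, not a statement of the switches or topologies Scale42 deploys.

```python
# Back-of-the-envelope fat-tree capacity, illustrative only. Radix values
# are assumed common port counts, not Scale42-deployed hardware.
def max_gpus_two_tier(radix: int) -> int:
    # Non-blocking leaf/spine: each leaf splits ports half down, half up,
    # and each spine can reach at most `radix` leaves.
    return radix * radix // 2

def max_gpus_three_tier(radix: int) -> int:
    # Classic three-level fat-tree supports radix**3 / 4 endpoints.
    return radix ** 3 // 4

for radix in (64, 128):
    print(f"radix {radix}: two-tier ~{max_gpus_two_tier(radix):,} GPUs, "
          f"three-tier ~{max_gpus_three_tier(radix):,} GPUs")
```

At those radices a 10k+ GPU cluster lands in a three-tier fabric, which is where low-latency interconnect between modules across the campus matters.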

Timelines

Speed-to-energise as a feature.

Permitted grid capacity is the bottleneck for AI infrastructure. Every Scale42 site is selected for available grid capacity with a confirmed connection date — not paper allocations. Modular design lets you energise the first 12.5 MW while the next module is still being built.

01

Site selection

Power, climate, fibre, permitting, partnership.

02

Module 1 (12.5 MW)

Phase 1 energisation — 12–18 months from go.

03

Modules 2–4 (50 MW)

Campus build-out in parallel with Module 1 commissioning.

04

Flagship scale

100 MW, 200 MW, 500 MW+ — phased to match your compute roadmap.
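
A minimal sketch of how phased energisation compounds into campus capacity. Only Module 1's 12–18-month window comes from the schedule above; the later month offsets are hypothetical placeholders showing the shape of a phased build-out.

```python
# Hypothetical ramp: cumulative campus capacity as modules energise.
# Only Module 1's 12-18 month window is taken from the plan above; later
# offsets are placeholders to show the shape of a phased build-out.
MODULE_MW = 12.5

energisation_month = {
    "Module 1": 15,  # within the stated 12-18 month window
    "Module 2": 21,  # placeholder
    "Module 3": 27,  # placeholder
    "Module 4": 33,  # placeholder
}

cumulative = 0.0
for module, month in energisation_month.items():
    cumulative += MODULE_MW
    print(f"Month {month:>2}: {module} live -> {cumulative:.1f} MW energised")
```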

Next steps

Tell us about your workload.

From 5,000-GPU training jobs to multi-region inference fleets — we'll match you to a site, a module count and a timeline.

Request RFI pack →