AI Factories · Gigascale Infrastructure

Infrastructure designed around the workload, not the building

Traditional data centres were built for general-purpose compute. Volta's AI Factories are engineered around the specific demands of GPU-scale AI — extreme power density, precision cooling, and the network architecture that high-throughput distributed training requires.

Two gigascale campuses operational at launch, with three to five additional sites confirmed across Europe and the US. Modular expansion built in from day one.

Not retrofitted buildings with GPU racks bolted on. Facilities where every square metre, every cooling loop, and every network path was designed with a single purpose: maximising GPU utilisation at the largest training scale.

Request Access · Download Spec Sheet
GPU Architecture: NVIDIA Blackwell / Rubin
Cooling: Liquid-cooled, 200kW+ racks
Network Fabric: Non-blocking InfiniBand
Storage: All-flash NVMe-oF
Power at Launch: 1GW+ contracted
Power Pipeline: Path to 20GW
GPU Count by 2027: 500,000
Deployment Speed: 65 days from signing to live
Power Resilience: Behind-the-meter island mode
Locations: Europe + United States

Two gigascale campuses at launch

Operational from May 2026. Three to five additional sites confirmed across Europe and the United States.

Operational · May 2026

Campus Alpha — Europe

Purpose-built gigascale AI Factory campus. Liquid cooling throughout, non-blocking InfiniBand at full scale, behind-the-meter power generation with island mode capability.

200kW+
Rack Density
100ms
Latency Target
99.999%
Availability
Operational · May 2026

Campus Beta — United States

Identical architecture to Campus Alpha. Full InfiniBand mesh, immersion and direct liquid cooling, dedicated power plant. Interconnected with Alpha for global training runs.

200kW+
Rack Density
100ms
Latency Target
99.999%
Availability
Engineering Detail

Engineered for the demands of AI at scale

Extreme Power Density

200kW+ per rack at launch, with infrastructure designed to support higher densities as GPU architecture demands increase. No retrofitting required as workloads scale.
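The power-density figures above imply a simple capacity budget. A minimal sketch, using the 1GW and 200kW numbers from this page but with an assumed PUE (the actual overhead factor is not stated here):

```python
# Back-of-envelope rack capacity for a given power budget.
# PUE value is an illustrative assumption, not a Volta specification.

def rack_capacity(campus_power_w: float, rack_power_w: float, pue: float) -> int:
    """Racks supportable once cooling/overhead is folded in via PUE."""
    it_power_w = campus_power_w / pue       # power remaining for IT load
    return int(it_power_w // rack_power_w)

# 1GW contracted at launch, 200kW racks, assumed PUE of 1.1
# (liquid cooling typically yields a low PUE; the exact value is a guess).
print(rack_capacity(1e9, 200e3, 1.1))  # 4545
```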

Precision Liquid Cooling

Direct liquid cooling for GPU clusters. Rear-door heat exchangers and immersion cooling where required. Designed for sustained maximum load across the entire rack population simultaneously.
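Sustained maximum load at 200kW per rack translates directly into coolant flow via Q = ṁ·c_p·ΔT. A sketch with assumed fluid properties (water, 10 K temperature rise; the real loop design is not specified here):

```python
# Coolant flow needed to remove a rack's heat load: Q = m_dot * c_p * dT.
# Fluid properties and temperature rise are illustrative assumptions.

def coolant_flow_lpm(heat_w: float, cp_j_per_kg_k: float = 4186.0,
                     delta_t_k: float = 10.0, density_kg_per_l: float = 1.0) -> float:
    """Litres per minute of coolant to absorb heat_w at the given temperature rise."""
    mass_flow_kg_s = heat_w / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0

print(round(coolant_flow_lpm(200e3)))  # ~287 L/min for a 200 kW rack
```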

Non-blocking InfiniBand

A full non-blocking InfiniBand fabric eliminates congestion and sustains maximum GPU utilisation. RDMA performance keeps pace with the largest training runs, with no network bottleneck at any scale.
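Non-blocking fabrics are conventionally built as fat-trees, where switch radix fixes the host count at full bisection bandwidth. A sketch of the standard sizing formulas (the actual Volta topology is not specified on this page):

```python
# Hosts a non-blocking (full-bisection) fat-tree supports with k-port switches.
# Standard fat-tree sizing results; the real fabric layout is an assumption.

def fat_tree_hosts(ports: int, tiers: int) -> int:
    if tiers == 2:
        return ports ** 2 // 2    # leaf-spine: half of each leaf's ports face hosts
    if tiers == 3:
        return ports ** 3 // 4    # classic three-level fat-tree
    raise ValueError("only 2- or 3-tier trees modelled")

print(fat_tree_hosts(64, 2))  # 2048 hosts with 64-port switches
print(fat_tree_hosts(64, 3))  # 65536 hosts at three tiers
```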

Behind-the-Meter Power

Dedicated on-site generation with island mode capability means no grid dependency at full scale: no permitting delays, no regulatory risk, no exposure to grid capacity constraints.

All-Flash NVMe-oF Storage

Fabric-attached NVMe-oF storage providing the I/O throughput that large model training requires. Petabyte-scale capacity with the latency and bandwidth profile GPU training demands.
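One concrete way to see why checkpoint I/O sets the storage bandwidth bar: the time to persist a checkpoint is its size over the aggregate write path. A sketch with illustrative model-size and bandwidth assumptions (neither figure comes from this page):

```python
# Time to write a training checkpoint over fabric-attached storage.
# Model size and aggregate bandwidth are illustrative assumptions.

def checkpoint_seconds(params: float, bytes_per_param: float,
                       aggregate_write_gbps: float) -> float:
    """Seconds to persist one checkpoint at the given aggregate write bandwidth (GB/s)."""
    total_gb = params * bytes_per_param / 1e9
    return total_gb / aggregate_write_gbps

# 1T parameters in bf16 (2 bytes/param) over an assumed 1 TB/s aggregate write path.
print(checkpoint_seconds(1e12, 2, 1000))  # 2.0 seconds
```

Optimizer state typically multiplies the checkpoint size several-fold, which is why aggregate rather than per-node bandwidth is the figure that matters.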

Modular Expansion

Facilities designed for modular expansion from day one. Adding capacity does not require reconfiguration of existing infrastructure — new modules simply extend the existing fabric.

"Power — secured, not sourced. Over 1GW contracted at launch. Behind-the-meter generation, no grid dependencies, no permitting delays."
Volta Power Infrastructure

Deploy on Volta's AI Factories

Access reserved multi-thousand GPU clusters with the physical infrastructure engineered around your workload. From single-tenant training clusters to inference at scale.

Request Access

Co-location & Campus Options

Dedicated campus configurations, co-location within Volta's facilities, and long-term capacity contracts. Custom infrastructure design for unique requirements.

Talk to our team