Traditional data centres were built for general-purpose compute. Volta's AI Factories are engineered around the specific demands of GPU-scale AI — extreme power density, precision cooling, and the network architecture that high-throughput distributed training requires.
Two gigascale campuses operational at launch, with three to five additional sites confirmed across Europe and the US. Modular expansion built in from day one.
Not retrofitted buildings with GPU racks bolted on. Facilities where every square metre, every cooling loop, and every network path was designed with a single purpose: maximising GPU utilisation at the largest training scale.
| Specification | Detail |
| --- | --- |
| GPU Architecture | NVIDIA Blackwell / Rubin |
| Cooling | Liquid-cooled, 200kW+ racks |
| Network Fabric | Non-blocking InfiniBand |
| Storage | All-flash NVMe-oF |
| Power at Launch | 1GW+ contracted |
| Power Pipeline | Path to 20GW |
| GPU Count by 2027 | 500,000 |
| Deployment Speed | 65 days from signing to live |
| Power Resilience | Behind-the-meter island mode |
| Locations | Europe + United States |
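As a rough sanity check on the headline figures above, the contracted launch power and the 2027 GPU target imply an all-in power budget of about 2kW per GPU. This is an illustrative back-of-envelope sketch only; it assumes all contracted power serves the GPU fleet and folds cooling, networking, and facility overhead into the per-GPU figure.

```python
# Illustrative arithmetic from the table above (assumption: all 1GW
# serves the 500,000-GPU fleet, including cooling and facility overhead).
launch_power_kw = 1_000_000   # 1GW+ contracted at launch
gpu_count_2027 = 500_000      # target GPU count by 2027

per_gpu_kw = launch_power_kw / gpu_count_2027
print(f"{per_gpu_kw:.1f} kW all-in per GPU")  # 2.0 kW all-in per GPU

# At 200kW+ per rack, that budget corresponds to roughly 100
# GPU-equivalent power envelopes per rack.
gpu_budgets_per_rack = 200 / per_gpu_kw
print(f"~{gpu_budgets_per_rack:.0f} GPU power budgets per 200kW rack")
```

The ~2kW figure is consistent with current-generation accelerator power draw once cooling and distribution losses are included, which is why the table's numbers hang together.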
Operational from May 2026. Three to five additional sites confirmed across Europe and the United States.
Purpose-built gigascale AI Factory campus. Liquid cooling throughout, non-blocking InfiniBand at full scale, behind-the-meter power generation with island mode capability.
Identical architecture to Campus Alpha. Full InfiniBand mesh, immersion and direct liquid cooling, dedicated power plant. Interconnected with Alpha for global training runs.
200kW+ per rack at launch, with infrastructure designed to support higher densities as GPU architecture demands increase. No retrofitting required as workloads scale.
Direct liquid cooling for GPU clusters. Rear-door heat exchangers and immersion cooling where required. Designed for sustained maximum load across the entire rack population simultaneously.
Non-blocking InfiniBand fabric that eliminates congestion and keeps GPUs fully utilised. RDMA performance that keeps pace with the largest training runs, with no network bottleneck at any scale.
Dedicated behind-the-meter power generation with island mode capability: no grid dependency at full scale, no permitting delays, and no exposure to grid capacity constraints.
Fabric-attached NVMe-oF storage providing the I/O throughput that large model training requires. Petabyte-scale capacity with the latency and bandwidth profile GPU training demands.
Facilities designed for modular expansion from day one. Adding capacity does not require reconfiguration of existing infrastructure — new modules simply extend the existing fabric.
"Power — secured, not sourced. Over 1GW contracted at launch. Behind-the-meter generation, no grid dependencies, no permitting delays."

Volta Power Infrastructure
Access reserved multi-thousand-GPU clusters with physical infrastructure engineered around your workload. From single-tenant training clusters to inference at scale.
Request Access

Dedicated campus configurations, co-location within Volta's facilities, and long-term capacity contracts. Custom infrastructure design for unique requirements.
Talk to our team