EQS-News: Nscale and Microsoft Announce Collaboration with NVIDIA and Caterpillar to Deliver 1.35GW of NVIDIA Vera Rubin NVL72 GPUs at Flagship AI Factory Campus in West Virginia
A bold new alliance aims to reshape enterprise AI by building a massive, self-contained compute campus in West Virginia. Nscale Energy & Power teams up with Microsoft to deploy 1.35 gigawatts of cutting-edge AI hardware based on NVIDIA Vera Rubin NVL72 GPUs and the DSX AI Factory blueprint, with the Monarch Compute Campus serving as the flagship site for this global deployment.
Strategic Vision and Deployment Scale
The collaboration centers on positioning Nscale as a premier, globally recognized deployment partner for NVIDIA’s Vera Rubin architecture. The Monarch Campus, located on an expansive tract in Mason County, West Virginia, will host a multi-stage rollout designed to support large-scale AI training and inference. Plans call for phased capacity additions beginning in the latter part of 2027, scaling toward what could become one of the largest dedicated AI compute platforms worldwide.
In parallel, Nscale expanded its portfolio with the acquisition of a major energy and power platform, consolidating assets that include a campus capable of hosting substantial AI workloads and a microgrid designed to operate resiliently and independently of conventional grids. The initiative is framed as a long-term compute services program paired with strategic data-center leasing, signaling a comprehensive approach to sustaining high-intensity AI workloads at scale.
Technology and Architecture
The project leverages the latest generation of NVIDIA Vera Rubin GPUs, implemented within the DSX AI Factory reference design. This blueprint is intended to streamline deployment, optimize cooling and power efficiency, and provide a scalable foundation for training, fine-tuning, and real-time inference across diverse AI models.
Power, Reliability and On-Site Microgrid
A cornerstone of the plan is robust on-site power generation. Through a close collaboration with Caterpillar, the initiative will deploy G3500-series natural gas generator sets delivering up to two gigawatts of power by the first half of 2028. This scale enables a highly resilient compute environment that can operate with minimal disruption to external electrical grids, while retaining the option to export power back to the wider grid when conditions permit.
The Monarch Campus is being designed with a dedicated, state-certified AI microgrid that can support rapid deployment timelines. By generating power on-site, the project aims to minimize the impact on local ratepayers and to provide a predictable, high-capacity energy supply for AI workloads. The approach also anticipates future expansion, with the architecture capable of accommodating significant increases in both capacity and sophistication of AI infrastructure.
Connectivity, Location and Latency
Strategic positioning near major data-center corridors and AI hubs is a key part of the design. The campus is planned to feature high-speed fiber connections to prominent AI and cloud infrastructure centers, ensuring low-latency access for training and inference tasks. Proximity to major markets and existing AI ecosystems in the region is expected to reduce data transit times and support rapid iteration cycles for developers and enterprises.
Environmental and Community Considerations
Beyond performance, the project emphasizes sustainable design and local collaboration. Environmental resources are being integrated into the campus planning, with efforts to minimize water usage and to implement efficient cooling solutions. The team is engaging with state and local partners to maximize community benefits, including potential job creation, workforce development, and long-term infrastructure improvements tied to the region’s economic growth.
In line with forward-looking sustainability goals, the campus explores carbon-management options, including potential sequestration opportunities that align with broader decarbonization objectives. The on-site microgrid model further supports resilience and reliability while seeking to reduce environmental impact relative to traditional centralized power systems.
About the Initiative
The collaboration represents a convergence of advanced hardware, energy infrastructure, and digital strategy aimed at delivering AI capabilities at industrial scale. By combining Nscale’s engineering approach to hyperscale AI infrastructure with Microsoft’s platform capabilities and NVIDIA’s Vera Rubin GPUs, the project seeks to push the boundaries of what is feasible in large-scale AI training and deployment, while maintaining a strong emphasis on energy efficiency, grid resilience, and regional benefits.
About Nscale
Nscale is building a globally scalable platform for AI infrastructure, delivering vertically integrated solutions and modular data-center capabilities designed to support enterprise AI from training to inference. Through a combination of innovative engineering and strategic partnerships, Nscale aims to provide the foundation for next-generation AI workloads at scale across Europe and North America.