According to DCD, CoreSite has secured managed services provider STN as a customer for its CH2 Chicago data center, where STN will deploy GPU One, a private cloud platform featuring more than 1,500 liquid-cooled Nvidia B200 GPUs across 24 racks, launching in April 2025. The deployment marks a significant expansion of STN's relationship with CoreSite, which began in 2019 with a single rack in Santa Clara and has since grown to include Los Angeles and now Chicago facilities. CoreSite is supporting more than 100kW per rack with liquid cooling for STN, and the company notes that more than 90 percent of customers in the multi-tenant computer room at CH2 where STN resides are liquid-cooled. STN CEO Sabur Mian emphasized that CoreSite's leadership in liquid-cooled, high-density deployments was crucial to the partnership, allowing STN's customers to focus on AI model training and inference while STN and CoreSite ensure uptime. The deployment signals a strategic shift in cloud computing infrastructure toward specialized AI capabilities.
The Liquid Cooling Imperative
The scale of this deployment (1,500 GPUs across 24 racks at over 100kW per rack) represents a fundamental shift in data center design that traditional air cooling cannot support. At these power densities, liquid cooling isn't just an efficiency improvement; it's an operational necessity. What's particularly telling is that more than 90 percent of customers in STN's section of the CH2 facility are already using liquid cooling, indicating this isn't an isolated trend but rather the new standard for high-performance computing environments. The industry is rapidly approaching the physical limits of air cooling, and deployments like this validate liquid cooling as the only viable path forward for AI infrastructure at scale.
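The round numbers quoted above imply a rough per-rack and per-GPU power budget. A minimal back-of-envelope sketch, assuming the article's figures (1,500 GPUs, 24 racks, 100kW per rack; the actual rack configurations are not disclosed):

```python
# Back-of-envelope density check using the article's round figures.
# These are assumptions from the reported numbers, not disclosed specs.
gpus = 1500
racks = 24
kw_per_rack = 100  # "more than 100kW", so treat as a lower bound

gpus_per_rack = gpus / racks              # average GPUs per rack
kw_per_gpu = kw_per_rack / gpus_per_rack  # rack power available per GPU

print(f"{gpus_per_rack:.1f} GPUs/rack, ~{kw_per_gpu:.2f} kW of rack power per GPU")
```

That works out to roughly 62.5 GPUs and at least about 1.6kW of rack power per GPU, which is well beyond what conventional air-cooled rows are provisioned to dissipate and consistent with the article's point that liquid cooling is a necessity at this density.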
Chicago’s Emerging AI Infrastructure Role
Chicago represents a strategic middle ground in the AI infrastructure landscape, offering several advantages that traditional tech hubs cannot match. The city’s central geographic location provides lower latency connections to both coasts while benefiting from robust fiber connectivity and competitive power costs. More importantly, Chicago’s climate offers natural cooling advantages that reduce operational expenses for high-density deployments. This move follows a broader pattern of AI infrastructure decentralization, where companies are seeking locations that balance technical requirements with economic efficiency rather than simply following historical tech concentration patterns.
Broader Market Implications
The timing of this April 2025 launch is particularly significant, as it positions STN to capture demand from the next generation of AI models that will require even more computational resources than current systems. The Nvidia B200 GPUs represent the cutting edge of AI acceleration, and deploying them at this scale suggests STN is preparing for inference workloads that dwarf current requirements. This also reflects a maturation of the AI infrastructure market, in which early experimentation is giving way to planned, scalable deployments with clear business cases. The mention that the cluster will partly support robotics startup Skild Brain indicates a convergence between different AI domains, where the same infrastructure can serve multiple advanced computing needs.
Strategic Positioning in Evolving Market
STN's evolution from a single rack in 2019 to multi-facility deployments today mirrors the broader trajectory of AI infrastructure providers. The company appears to be carving out a niche between hyperscale cloud providers and bare metal offerings, focusing specifically on AI builders who need performance-optimized environments without managing the underlying infrastructure. This "private cloud for AI builders" approach addresses a genuine gap in the market: organizations that have outgrown experimental phases but aren't ready for hyperscale commitments. However, the success of this model depends critically on STN's ability to maintain competitive pricing while delivering the specialized support that AI workloads demand.
Future Outlook and Challenges
While impressive in scale, this deployment faces several challenges beyond the obvious technical hurdles. The rapid pace of GPU innovation means that B200 systems deployed in 2025 may face competition from next-generation hardware shortly after launch. Additionally, the concentration of high-value AI infrastructure in single locations creates both operational and security concerns that must be addressed through robust redundancy and protection measures. The success of this venture will depend not just on technical execution but on STN’s ability to attract and retain customers in an increasingly competitive AI infrastructure market where hyperscale providers are aggressively expanding their own AI offerings.