HPE’s Cray GX5000 supercomputer specs revealed

According to DCD, HPE has released comprehensive technical specifications for its Cray Supercomputing GX5000 platform, detailing three new compute blades: the GX440n with four Nvidia Vera CPUs and eight Rubin GPUs, the GX350a with one AMD Venice Epyc CPU and four Instinct MI430X GPUs, and the GX250 with eight AMD Venice Epyc CPUs. The company also announced that Germany's High-Performance Computing Center Stuttgart (HLRS) and the Leibniz Supercomputing Center (LRZ) have selected the platform for their Herder and Blue Lion supercomputers, respectively. All blades are 100 percent liquid-cooled and can be mixed within racks, with compute configurations scaling up to 192 Rubin GPUs or 112 MI430X GPUs per rack. The platform's K3000 storage system uses HPE ProLiant Compute DL360 Gen12 servers in multiple NVMe drive and DRAM configurations, while networking includes Slingshot 400 with up to 2,048 ports. HPE expects to begin delivering GX5000-based systems in early 2027.

What makes this supercomputer different

Here’s the thing about modern supercomputing – it’s not just about raw power anymore. The GX5000 platform represents HPE’s attempt to create what they call a “converged” system that handles both traditional HPC workloads and AI training simultaneously. Basically, they’re betting that scientific computing and artificial intelligence are becoming inseparable. The blade approach is particularly interesting because it lets customers mix and match CPU-only, AMD-accelerated, and Nvidia-accelerated compute within the same infrastructure. That’s a big deal for research institutions that might have different departments running completely different types of workloads.
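
To make the mix-and-match arithmetic concrete, here's a minimal Python sketch that back-derives blades-per-rack from the per-rack GPU maximums HPE quotes. The GPUs-per-blade counts come from the published specs; the blades-per-rack figures are inferences, not official numbers.

```python
# Back-of-envelope check on GX5000 blade/rack arithmetic.
# GPUs-per-blade come from HPE's published blade specs; blades-per-rack
# is inferred from the quoted per-rack maximums, not an official figure.

BLADES = {
    "GX440n": {"gpus_per_blade": 8, "gpu_type": "Nvidia Rubin"},
    "GX350a": {"gpus_per_blade": 4, "gpu_type": "AMD Instinct MI430X"},
    "GX250":  {"gpus_per_blade": 0, "gpu_type": None},  # CPU-only blade
}

RACK_GPU_MAX = {"Nvidia Rubin": 192, "AMD Instinct MI430X": 112}

for name, spec in BLADES.items():
    if spec["gpus_per_blade"]:
        blades = RACK_GPU_MAX[spec["gpu_type"]] // spec["gpus_per_blade"]
        print(f"{name}: {blades} blades/rack -> "
              f"{blades * spec['gpus_per_blade']} {spec['gpu_type']} GPUs")
```

The implied pure-configuration densities (24 GX440n blades or 28 GX350a blades per rack) are just inferences from the quoted maximums; mixed racks would land somewhere in between.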

The liquid cooling advantage

All three blades being 100 percent liquid-cooled isn’t just a technical footnote – it’s absolutely critical for systems of this scale. When you’re packing up to 192 high-performance GPUs in a single rack, air cooling simply doesn’t cut it anymore. The heat density would be insane. Liquid cooling allows for much tighter packing of components and significantly better energy efficiency. For supercomputing centers dealing with massive electricity bills, that efficiency translates directly into operational cost savings. And let’s be honest – when you’re building systems that cost tens or hundreds of millions, every percentage point of efficiency matters.
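
To put "every percentage point" in rough dollar terms, here's a hedged back-of-envelope sketch. The per-GPU power draw, PUE values, and electricity price below are illustrative assumptions, not HPE figures; only the 192-GPU rack maximum comes from the announcement.

```python
# Illustrative rack-power and cooling-overhead arithmetic.
# Every input except GPUS_PER_RACK is an assumption for illustration;
# HPE has not published per-GPU power or PUE figures for the GX5000.

GPUS_PER_RACK = 192     # HPE's quoted Rubin maximum per rack
WATTS_PER_GPU = 1_500   # assumed draw for a Rubin-class accelerator
PUE_AIR = 1.5           # assumed PUE for a conventional air-cooled hall
PUE_LIQUID = 1.1        # assumed PUE for direct liquid cooling
PRICE_PER_KWH = 0.15    # assumed electricity price, USD
HOURS_PER_YEAR = 24 * 365

it_load_kw = GPUS_PER_RACK * WATTS_PER_GPU / 1_000  # GPU load only
for label, pue in (("air", PUE_AIR), ("liquid", PUE_LIQUID)):
    facility_kw = it_load_kw * pue
    yearly_cost = facility_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"{label:>6}: {facility_kw:6.0f} kW facility load, "
          f"~${yearly_cost:,.0f}/year per rack")
```

Even with these rough inputs, the overhead gap between the two cooling regimes works out to six figures per rack per year, and that's before counting the fact that air cooling couldn't pack 192 GPUs into one rack in the first place.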

Storage and networking specs that matter

The K3000 storage system based on HPE’s ProLiant DL360 Gen12 servers shows they’re taking a practical approach to storage. Offering multiple NVMe configurations from eight to sixteen drives gives customers flexibility based on their specific I/O requirements. But the real story might be in the networking. Slingshot 400 with 400Gbps ports and configurations scaling to 2,048 ports indicates HPE is serious about avoiding communication bottlenecks. In supercomputing, your compute nodes are only as fast as they can talk to each other. Having multiple connectivity options including InfiniBand NDR and 400Gbps Ethernet means they’re covering all the bases for different customer preferences.
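
For a sense of scale on those bottlenecks, here's a quick sketch of the fabric and storage arithmetic. The port count and port speed come from the published specs; the per-drive NVMe throughput is an assumed ballpark for a current PCIe Gen5 SSD.

```python
# Aggregate-bandwidth arithmetic for Slingshot 400 and the K3000 nodes.
# Port count/speed are published figures; per-drive NVMe throughput is
# an assumed ballpark (~14 GB/s, a fast PCIe Gen5 SSD), not an HPE spec.

PORTS = 2_048
PORT_GBPS = 400                 # Slingshot 400 line rate per port
fabric_tbps = PORTS * PORT_GBPS / 1_000
print(f"Fabric aggregate: {fabric_tbps:,.1f} Tbps "
      f"(~{fabric_tbps / 8:,.0f} TB/s)")

DRIVE_GBPS = 14 * 8             # assumed ~14 GB/s per drive, in Gbps
for drives in (8, 16):          # the DL360 Gen12 node's drive range
    node_gbps = drives * DRIVE_GBPS
    print(f"{drives:2d}-drive node: ~{node_gbps / 8:.0f} GB/s raw NVMe "
          f"vs {PORT_GBPS / 8:.0f} GB/s per 400 Gbps port")
```

Under these assumptions, even an eight-drive node's raw flash throughput outruns a single 400 Gbps port, which is presumably part of why the storage nodes ship with multiple connectivity options: the fabric link, not the flash, is the likelier ceiling.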

Where this fits in the bigger picture

Looking at the broader computing landscape, systems like the GX5000 represent the cutting edge of what's possible in high-performance computing. While most industrial applications don't need anywhere near this level of power, the underlying technologies often trickle down to more mainstream industrial computing. The panel PCs and embedded systems used for factory automation and process control run far more modest workloads, but the principles are the same: reliable, efficient computing that can handle demanding workloads in challenging environments. The difference is one of scale rather than fundamental approach.

Why German centers matter

The fact that HLRS and LRZ are early adopters tells you something about where the supercomputing market is heading. German research centers have historically been at the forefront of HPC adoption in Europe. Their choice of the GX5000 platform for both Herder and Blue Lion systems validates HPE’s approach of offering configurable, converged HPC-AI infrastructure. It also suggests that European supercomputing is continuing its strong investment trajectory despite economic uncertainties. When you get two major German centers signing on for the same platform, that’s a pretty strong endorsement of the architecture.
