HPE, Stephen Hawking’s COSMOS to Investigate the Early Universe


Hewlett Packard Enterprise has announced a new collaborative effort with the Stephen Hawking Centre for Theoretical Cosmology. The new effort, which leverages HPE's Superdome Flex servers, is a step forward for the COSMOS project, which has existed in one form or another since 1997. Back then, SGI and Intel were both major players, though HPE appears to be handling the upgrades and improvements on its own so far.

“Our COSMOS group is working to understand how space and time work, from before the first trillion trillionth of a second after the Big Bang up to today,” said Hawking, the Tsui Wong-Avery Director of Research in Cambridge’s Department of Applied Mathematics and Theoretical Physics, in a statement. “The recent discovery of gravitational waves offers amazing insights about black holes and the whole Universe. With exciting new data like this, we need flexible and powerful computer systems to keep ahead so we can test our theories and explore new concepts in fundamental physics.”

HPE says this new Superdome will join an existing HPE Apollo supercomputer, but it'll scarcely be handling the heavy lifting all on its own. COSMOS already fields an SGI Altix UV2000; Cosmos2 (1,536 Intel Xeon E5-4650L cores); and Cosmic (288 Intel Xeon E5-4650L cores, along with 24 Intel Xeon Phi 5110P coprocessors).


The Centre for Theoretical Cosmology’s staff.

HPE's Superdome Flex is unquestionably a beast; the machine can scale from 4 to 32 sockets and supports 768GB to 48TB of memory. Exactly what benefits this will deliver to the COSMOS project is unknown. While HPE makes much of the system's in-memory computing (meaning it holds huge datasets entirely in DRAM), it's not clear how much of a specific benefit this provides here. NextPlatform has done a good deep dive into in-memory computing, pointing out that while it's indisputably good for solving certain kinds of problems, it's not the panacea HPE markets it as being: "The glib way to say it is that in-memory does work if everything fits, and when it doesn't fit, it really doesn't fit and you have got a problem," NextPlatform writes.


Superdome Flex. Credit: HPE

DRAM also costs much more per byte than conventional storage and draws considerably more power than NAND flash or magnetic disks. It’s also not clear if the COSMOS project can make use of the platform’s in-memory computing if other systems on the same network aren’t optimized to do so as well. Then again, one of the major differences between an HPC cluster and a conventional platform is that it’s often worth it to perform extensive customization on a system for HPC workloads, thanks to the long-term improvements such optimization can deliver.
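The "fits versus doesn't fit" tradeoff NextPlatform describes can be made concrete with a toy sketch. This is purely illustrative (not COSMOS or HPE code): when a dataset fits in DRAM you operate on it directly in one pass, and when it doesn't, you fall back to streaming it from storage in chunks, paying I/O cost on every pass over the data.

```python
# Illustrative sketch of in-memory vs. out-of-core processing.
# The function names and chunk size are arbitrary choices for this example.

def sum_in_memory(values):
    """Dataset already resident in RAM: a single fast pass, no I/O."""
    return sum(values)

def sum_out_of_core(path, chunk_size=1 << 20):
    """Dataset lives on disk: stream it in fixed-size chunks.

    Each pass over the data re-pays the read cost -- the penalty
    that in-memory systems avoid, as long as everything fits.
    """
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total += sum(chunk)  # treat each byte as a data value, for illustration
    return total
```

Both functions compute the same result; the difference is where the data lives while they do it. Real HPC workloads that iterate over a dataset many times (as simulation and analysis codes often do) multiply that I/O penalty by the number of passes, which is the core of the in-memory pitch.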

The new Superdome Flex system won’t be limited to COSMOS, but will also be available to numerous other research departments.

“High performance computing has become the third pillar of research and we look forward to new developments across the mathematical sciences in areas as diverse as ocean modeling, medical imaging and the physics of soft matter,” said Professor Nigel Peake, Head of the Cambridge Department of Applied Mathematics and Theoretical Physics.

Top image: NASA/JPL-Caltech

