Strong performance for the first experimental RISC-V supercomputer


A European team of university students has put together the first RISC-V supercomputer capable of balancing power consumption and performance.

More importantly, it demonstrates a potential way forward for RISC-V in high-performance computing and, by extension, a chance for Europe to shed its dependence on US chip technology entirely.

The ‘Monte Cimone’ cluster won’t be crunching massive weather simulations or the like anytime soon, as it’s just an experimental machine. That said, it goes to show that the performance sacrifices for lower power envelopes aren’t necessarily as dramatic as many think.

The six-node cluster, built by folks from the University of Bologna and CINECA, Italy’s largest supercomputing center, was part of a wider student cluster competition to showcase various elements of HPC performance beyond raw floating-point capability. The team behind the cluster, called NotOnlyFLOPs, wanted to establish the power-performance profile of RISC-V when using SiFive’s Freedom U740 system-on-chip.

This 2020-era SoC has five 64-bit RISC-V processor cores – four U7 application cores and one S7 system management core – 2 MB of L2 cache, Gigabit Ethernet, and various peripheral and hardware controllers. It can run at up to roughly 1.4 GHz.
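Linux makes that core complement easy to inspect, for what it’s worth. Here is a minimal Python sketch, assuming a Linux/RISC-V system such as a U740 board; the exact fields exposed (isa, mmu, uarch) vary by kernel version, and the S7 management hart is typically not visible to the OS:

```python
# Minimal sketch: summarize the RISC-V harts Linux exposes via /proc/cpuinfo.
# On a Freedom U740 board this typically reports the four U7 application
# cores with an ISA string like rv64imafdc; exact fields vary by kernel.
from collections import Counter

def summarize_cpuinfo(path: str = "/proc/cpuinfo") -> None:
    harts = 0
    fields: Counter = Counter()
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = (part.strip() for part in line.split(":", 1))
            if key == "processor":
                harts += 1
            elif key in ("isa", "mmu", "uarch"):
                fields[(key, value)] += 1
    print(f"harts visible to Linux: {harts}")
    for (key, value), count in sorted(fields.items()):
        print(f"  {count}x {key}: {value}")

if __name__ == "__main__":
    summarize_cpuinfo()
```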

Here is an overview of the components as well as the feeds and speeds of Monte Cimone:

  • Six dual-board servers, each in a 4.44 cm (1U) high, 42.5 cm wide, 40 cm deep form factor. Each board follows the industry-standard Mini-ITX form factor (170 mm by 170 mm);
  • Each board includes a SiFive Freedom U740 SoC and 16 GB of 64-bit DDR memory running at 1866 MT/s, plus a PCIe Gen 3 x8 bus running at 7.8 GB/s (see the quick arithmetic after this list), a Gigabit Ethernet port, and USB 3.2 Gen 1 interfaces;
  • Each node has an M.2 M-key expansion slot occupied by a 1 TB NVMe 2280 SSD that hosts the operating system. A microSD card is inserted into each board and used for UEFI boot;
  • Two 250 W power supplies are integrated into each node to support the hardware as well as future accelerators and PCIe expansion cards.
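As for that 7.8 GB/s figure, the quick arithmetic checks out against the PCIe Gen 3 spec: each lane signals at 8 GT/s with 128b/130b encoding, so an x8 link carries roughly

  8 lanes × 8 GT/s × 128/130 ≈ 63 Gb/s ≈ 7.9 GB/s

of raw bandwidth in each direction, which matches the number quoted above.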

A top view of each node, showing the two SiFive Freedom SoC boards

The Freedom SoC motherboards are essentially SiFive’s HiFive Unmatched boards. Two of the six compute nodes are equipped with an Infiniband Host Channel Adapter (HCA), since Infiniband is what most supercomputers use. The goal was to deploy Infiniband at 56 Gb/s to enable RDMA and wring out the maximum possible I/O performance.

It’s an ambitious goal for such a young architecture, and it didn’t come together without a few hiccups. “PCIe Gen 3 lanes are currently supported by the vendor,” the cluster team wrote.

“Preliminary experimental results show that the kernel is able to recognize the device and load the kernel module needed to manage Mellanox’s OFED stack. We are unable to use the full RDMA capabilities of the HCA due to as-yet-unidentified incompatibilities between the software stack and the kernel driver. Nonetheless, we successfully ran an IB ping test between two cards and between a card and an HPC server, showing that full Infiniband support might be achievable. This is currently a feature under development.”

The HPC software stack turned out to be easier to stand up than expected. “We ported to Monte Cimone all the essential services needed to run HPC workloads in a production environment, namely NFS, LDAP and the SLURM job scheduler. Porting all the necessary software packages to RISC-V was relatively straightforward, so we can safely say that there are no barriers to exposing Monte Cimone as a computing resource in an HPC installation,” the team noted.
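To make “no barriers” concrete, here is a minimal, hypothetical Python sketch (not from the team’s report) of the kind of sanity check one might run on such a node, confirming it reports as riscv64 and that the SLURM client tools and NFS mounts are in place:

```python
# Illustrative sketch (not from the team's report): sanity-check that a
# Monte Cimone-style node reports as riscv64 and that the services the
# team ported (SLURM client tools, NFS mounts) are visible from the node.
import platform
import shutil
import subprocess

def check_node() -> None:
    # 1. Architecture: Linux on the U740 reports "riscv64" here.
    arch = platform.machine()
    print(f"architecture: {arch}" + ("" if arch == "riscv64" else " (not riscv64?)"))

    # 2. SLURM: if the client tools are installed, sinfo lists partitions.
    if shutil.which("sinfo"):
        subprocess.run(["sinfo", "--summarize"], check=False)
    else:
        print("sinfo not found: SLURM client tools are missing")

    # 3. NFS: scan /proc/mounts for nfs/nfs4 filesystems.
    with open("/proc/mounts") as mounts:
        nfs = [parts[1] for parts in (line.split() for line in mounts)
               if len(parts) > 2 and parts[2].startswith("nfs")]
    print(f"NFS mounts: {', '.join(nfs) if nfs else 'none found'}")

if __name__ == "__main__":
    check_node()
```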

While a remarkable architectural addition to the supercomputing ranks, a RISC-V cluster like this is unlikely to make the list of the 500 fastest systems in the world. Its design brief is that of a low-power workhorse, not a floating-point monster.

As the development team noted in its detailed report describing the system, “Monte Cimone is not intended to achieve strong floating-point performance, but it was built with the intention of ‘priming the pipe’ and exploring the challenges of integrating a multi-node RISC-V cluster capable of delivering a production HPC stack including interconnect, storage, and power monitoring infrastructure on RISC-V hardware.”

E4 Computer Engineering served as integrator and partner on the Monte Cimone cluster, which should eventually pave the way for further testing of the RISC-V platform itself, as well as of its ability to play well with other architectures, an important consideration since we are unlikely to see an exascale-class RISC-V system for at least the next few years.

According to E4, “Cimone enables developers to test and validate scientific and technical workloads on a rich software stack, including development tools, libraries for message-passing programming, BLAS, FFT, drivers for high-speed networks and I/O devices. The goal is to achieve a future-ready position capable of approaching and exploiting the features of the RISC-V ISA for scientific and engineering applications and workloads in an operational environment.”

Dr. Daniele Cesarini, HPC specialist at CINECA, said: “As a supercomputing center, we are very interested in RISC-V technology to support the scientific community. We are excited to contribute to the RISC-V ecosystem by supporting the installation and tuning of widely used scientific codes and mathematical libraries to advance the development of high-performance RISC-V processors. We believe that Monte Cimone will be the harbinger of the next generation of supercomputers based on RISC-V technology, and we will continue to work in synergy with E4 Computer Engineering and the University of Bologna to prove that RISC-V is ready to stand shoulder to shoulder with the HPC giants.”

Europe has plenty of RISC-V grants and projects underway, although the fruits of this work may take years to appear. Even Intel is now looking to RISC-V for the future of supercomputing. It’s quite a RISC-Y gamble (you saw it coming), but with few native architectural options in Europe, at least picking an early winner is easy.
