SNIA Summit on Persistent Memory and Computing Storage Part 1


SNIA held its Persistent Memory and Computational Storage Summit virtually this year, as it did last year. The Summit explored some of the latest developments in these areas. Let's look at some of the ideas presented on day one of the conference.

Dr. Yang Seok, vice president of the Memory Solutions Lab at Samsung, talked about the company's SmartSSD. He argued that computational storage devices, which offload processing from host processors, can reduce power consumption and thus provide a green computing alternative. He pointed out that the energy consumption of data centers has remained stable at around 1% of global electricity use since 2010 (200-250 TWh per year in 2020) thanks to technological innovations. Nevertheless, reducing greenhouse gas emissions from data centers by 53% between 2020 and 2030 remains a difficult goal.

Computational storage drives (CSDs) can offload work from a CPU, freeing the CPU for other tasks, or accelerate processing by running it closer to the stored data. This local processing consumes less power than a CPU. It can perform data-reduction operations inside the storage device (enabling higher effective capacity and more efficient storage), avoid data movement, and be part of a virtualization environment. CSDs are competitive and use less power for I/O-intensive tasks, and their aggregate computing power scales with the number of CSDs deployed.
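The data-movement savings described above can be illustrated with a minimal sketch (hypothetical block sizes and filter, not Samsung's implementation): when a filter runs on the device, only matching blocks cross the host link instead of the whole dataset.

```python
# Hypothetical sketch: compare bytes moved over the host link when a
# filter runs on the host versus near the data on a CSD.
def host_side_filter(blocks, predicate):
    """Host reads every block, then filters: all bytes cross the link."""
    moved = sum(len(b) for b in blocks)          # full dataset transferred
    kept = [b for b in blocks if predicate(b)]
    return kept, moved

def device_side_filter(blocks, predicate):
    """CSD filters in place: only matching blocks cross the link."""
    kept = [b for b in blocks if predicate(b)]
    moved = sum(len(b) for b in kept)            # reduced transfer
    return kept, moved

blocks = [bytes([i]) * 4096 for i in range(100)]
pred = lambda b: b[0] % 10 == 0                  # keep 10% of blocks
_, host_bytes = host_side_filter(blocks, pred)
_, csd_bytes = device_side_filter(blocks, pred)
print(f"host: {host_bytes} B moved, CSD: {csd_bytes} B moved")
```

With a 10% selectivity, the device-side path moves a tenth of the bytes, which is where the power and scalability advantages come from.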

Samsung announced its first SmartSSD in 2020 and says its next generation will be available soon; see the figure below. The next generation will allow greater customization of what the CSD's processor can do, enabling its use in more applications and potentially saving power for many processing tasks.

Stephen Bates from Eideticom and Kim Malone from Intel talked about new standards developments for NVMe computational storage. One addition to the command set for computational programs is the compute namespace: an entity within an NVMe subsystem that is capable of running one or more programs, may have asymmetric access to subsystem memory, and may support only a subset of all possible program types. The conceptual image below gives an idea of how this works.

Both device-defined and downloadable programs are supported. Device-defined programs are fixed programs provided by the manufacturer, or features implemented by the device that are exposed as programs, such as compression or decryption. Downloadable programs are loaded into the compute namespace by the host.
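The two program types can be sketched with a toy model (class and method names are illustrative, not the actual NVMe command set): device-defined programs are preinstalled, while the host downloads its own.

```python
import zlib

# Hypothetical model of an NVMe compute namespace. Device-defined
# programs ship with the device; downloadable programs are installed
# by the host. Names are illustrative only.
class ComputeNamespace:
    def __init__(self):
        # Device-defined programs fixed by the manufacturer
        self.programs = {"compress": zlib.compress,
                         "decompress": zlib.decompress}

    def load_program(self, name, fn):
        """Host downloads a program into the namespace."""
        self.programs[name] = fn

    def execute(self, name, data):
        """Run a named program against data in subsystem memory."""
        return self.programs[name](data)

cns = ComputeNamespace()
payload = b"abc" * 1000
packed = cns.execute("compress", payload)        # device-defined program
restored = cns.execute("decompress", packed)

# Host-supplied (downloadable) program, e.g. a simple counter
cns.load_program("count_a", lambda d: d.count(b"a"))
print(cns.execute("count_a", payload))           # prints 1000
```

The real command set also governs which programs a given compute namespace may run and how it reaches subsystem memory, which the model above glosses over.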

Andy Rudoff from Intel provided an update on persistent memory, walking through developments along a timeline. He noted that Intel Optane PMem became generally available in 2019. The image below shows Intel's approach to connecting Optane PMem to the memory bus.

Note that Direct Access (DAX) is key to this use of Optane PMem. The following image shows a timeline of PMem-related developments since 2012.

Andy reviewed several customer use cases for Intel's Optane PMem, including Oracle Exadata with PMem access via RDMA, Tencent Cloud, and Baidu. He also discussed future PMem directions, especially paired with CXL. These include accelerating AI/ML and data-centric applications with temporal caching and persistent metadata storage.

Jinpyo Kim from VMware and Michael Mesnier from Intel Labs talked about computational storage in a virtualized environment, in collaboration with MinIO. Featured uses included scrubbing data (reading data and detecting accumulated errors) under a MinIO storage stack and a Linux filesystem. They found that performing this computation near the stored data was 50% to 18 times more scalable, depending on link speed (the process is read-intensive). VMware, in prototype research with UC Irvine, did a project on near-storage log analysis and found an order of magnitude better query performance compared to software alone; this capability was ported to a Samsung SmartSSD.
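A scrub of the kind described above can be sketched in a few lines (a hypothetical CRC-per-block scheme, not MinIO's actual on-disk format): the device re-reads every block, verifies its checksum, and returns only the IDs of corrupted blocks, so results rather than raw data cross the host link.

```python
import zlib

# Hypothetical near-data scrubbing sketch: each block is stored with a
# CRC32; scrubbing re-reads blocks and reports mismatches.
def write_blocks(data_blocks):
    """Store each block as a (crc, data) pair."""
    return [(zlib.crc32(b), bytearray(b)) for b in data_blocks]

def scrub(store):
    """Re-read every block; return indices whose CRC no longer matches."""
    return [i for i, (crc, data) in enumerate(store)
            if zlib.crc32(bytes(data)) != crc]

store = write_blocks([bytes([i]) * 512 for i in range(8)])
store[3][1][0] ^= 0xFF                   # simulate an accumulated bit error
print(scrub(store))
```

Because scrubbing touches every byte but outputs almost nothing, it is exactly the read-intensive workload where the reported 50% to 18x scalability gains from near-data execution show up.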

VMware also used NGD computational storage devices to run a Greenplum MPP database. They have done considerable work on virtualizing CSDs (which they call vCSDs) on vSphere/vSAN, making it easier to share hardware accelerators and migrate a vCSD between compatible hosts. CSDs can be used to disaggregate cloud-native applications and offload storage-intensive functions. The figure below shows the collaborative efforts between MinIO, VMware, and Intel to use CSDs.

Meta's Chris Petersen talked about AI memory at Meta. Meta uses AI for many applications and at scale, from the data center to the edge. Because AI workloads evolve so rapidly, they require more vertical integration, from software requirements down to hardware design. A considerable portion of capacity requires high-bandwidth accelerator memory, but inference holds most of its capacity at low bandwidth compared to training. Additionally, inference has tight latency requirements. They found that a memory tier beyond HBM and DRAM can be leveraged, especially for inference.

They found that for software-defined memory backed by SSDs, they had to use SCM (Optane) SSDs. Using faster (SCM) SSDs reduced the need to scale out and therefore reduced power. The figure below shows Meta's view of the memory tiers required for AI applications, indicating that CXL-attached memory, with higher latency than DRAM but greater memory/storage capacity, will be needed to achieve optimized performance, cost, and efficiency.
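The tiering idea can be sketched as a placement policy (the tier names follow the discussion above, but the latency and cost numbers are invented for illustration and are not Meta's figures): each request lands on the cheapest tier that still meets its latency bound.

```python
# Hypothetical memory-tier placement sketch. Latency and relative
# cost values are illustrative only.
TIERS = [  # (name, approx load latency in ns, relative cost per GB)
    ("HBM", 100, 10.0),
    ("DRAM", 80, 5.0),
    ("CXL-DRAM", 250, 3.0),
    ("SCM-SSD", 10_000, 1.0),
]

def place(latency_budget_ns):
    """Return the lowest-cost tier that meets the latency budget."""
    ok = [t for t in TIERS if t[1] <= latency_budget_ns]
    return min(ok, key=lambda t: t[2])[0] if ok else None

print(place(300))      # latency-tolerant hot data fits on CXL-DRAM
print(place(50_000))   # bulk inference capacity lands on SCM-SSD
```

This is the trade the figure captures: tiers further from the CPU cost less per gigabyte but can only absorb the latency-tolerant share of the working set.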

Arthur Sainio and Pekon Gupta of SMART Modular Technologies discussed the use of NVDIMM-N in DDR5 and CXL-compatible applications. These NVDIMM-Ns include a battery backup that allows the module's DRAM contents to be written to flash memory in the event of a power outage. This technology is also being developed for CXL applications with an NV-XMM specification for devices that have an integrated power source for backup power and operate with the standard programming model for CXL Type-3 devices. The figure below shows the form factors of these devices.
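The save/restore behavior described above can be modeled in a few lines (a toy model of the mechanism, not SMART Modular's firmware): backup power lets the module copy DRAM to flash on power loss, and the image is restored at the next power-on.

```python
# Hypothetical model of NVDIMM-N behavior: battery/supercap backup
# powers a DRAM -> flash save on power failure; the image is restored
# into DRAM on the next power-on.
class NVDIMM:
    def __init__(self, size):
        self.dram = bytearray(size)      # volatile working memory
        self.flash = None                # non-volatile backup area

    def power_fail(self):
        """Backup-powered save: copy DRAM contents to flash."""
        self.flash = bytes(self.dram)

    def power_on(self):
        """Restore the saved image, if any, into DRAM."""
        if self.flash is not None:
            self.dram[:] = self.flash

module = NVDIMM(1024)
module.dram[0:4] = b"data"
module.power_fail()                      # outage: contents saved to flash
module.dram[:] = bytes(1024)             # DRAM contents lost without power
module.power_on()                        # DRAM image restored from flash
```

The appeal of the standard CXL Type-3 programming model is that software sees ordinary memory; the save/restore above happens entirely in the module.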

In addition to these talks, David Eggleston of Intuitive Cognitive Consulting moderated a panel discussion with the day's speakers, and there was a Birds of a Feather session on computational storage at the end of the day, moderated by Scott Shadley of NGD and AMD's Jason Molgaard.

Samsung, Intel, Eideticom, VMware, Meta, and SMART Modular gave insightful presentations on persistent memory and computational storage on day one of the SNIA 2022 Persistent Memory and Computational Storage Summit.
