October 1, 2022


Compute Express Link, or CXL, dramatically changes the way memory is used in computer systems. Tutorials at the IEEE Hot Chips conference and at the recent SNIA Storage Developer Conference (SDC) explored how CXL works and how it will change the way we do computing. In addition, recent announcements by Colorado startup IntelliProp about its Omega Memory Fabric chips pave the way for CXL implementations that enable memory pooling and composable infrastructure.

Initial applications for CXL were memory expansion for individual CPUs, but CXL will have its greatest impact in sharing many different types of memory technology (DRAM and non-volatile memory) between CPUs. The image below (from the CXL Hot Chips tutorial) shows the different ways memory can be shared with CXL.

As Yang Seok Ki, vice president at Samsung Electronics, said at SNIA SDC, CXL is an industry-supported cache-coherent interconnect for processors, memory expansion and accelerators. CXL versions 1.0 and 2.0 (which work with PCIe 5.0) were released earlier, and in early August, at the Flash Memory Summit, CXL version 3.0 was released, which works with the faster PCIe 6.0 interconnect. CXL 3.0 also enables multi-level switching, memory fabrics and peer-to-peer direct memory access.

The presentation also outlined the memory tiers that CXL enables: near memory attached directly to the CPU, CXL 2.0-enabled middle memory accessible to the CPU over a local CXL connection, and far memory reached through a CXL 3.0 switched fabric, as shown below.

The near memory is directly connected to the CPU. Some of the first CXL products available are middle-memory expansion products that provide additional memory capacity to the CPU. CXL opens the door to memory sharing by offering performance and cost trade-offs similar to those long used for data placement in storage.
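
To make the tiering idea concrete, here is a minimal Python sketch of a placement policy across near, middle and far memory, in the spirit of hot/cold data placement in storage. The tier names, capacities and latency figures are illustrative assumptions, not numbers from the CXL specification or any vendor.

```python
# Conceptual sketch only: models the near/middle/far memory tiers described
# above as a simple placement policy, the way a storage system places hot
# and cold data.  All capacities and latencies are assumed values.

from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    latency_ns: int      # assumed rough access latency
    capacity_gb: int     # assumed capacity
    used_gb: int = 0

    def has_room(self, size_gb: int) -> bool:
        return self.used_gb + size_gb <= self.capacity_gb

# Near memory: DRAM attached directly to the CPU.
# Middle memory: CXL 2.0 expansion on a local CXL link.
# Far memory: pooled memory reached over a CXL 3.0 switched fabric.
TIERS = [
    MemoryTier("near (CPU-attached DRAM)", latency_ns=100, capacity_gb=256),
    MemoryTier("middle (CXL 2.0 expander)", latency_ns=250, capacity_gb=1024),
    MemoryTier("far (CXL 3.0 fabric pool)", latency_ns=600, capacity_gb=8192),
]

def place(size_gb: int, hot: bool) -> MemoryTier:
    """Place an allocation: hot data fills the fastest tier with room,
    cold data starts from the largest, slowest tier."""
    order = TIERS if hot else list(reversed(TIERS))
    for tier in order:
        if tier.has_room(size_gb):
            tier.used_gb += size_gb
            return tier
    raise MemoryError("no tier has room for the allocation")

if __name__ == "__main__":
    print(place(64, hot=True).name)    # lands in near memory
    print(place(512, hot=False).name)  # lands in the fabric pool
```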

IntelliProp has just announced its Omega Memory Fabric chips. The chips implement the CXL standard along with the company’s fabric management software and Network Attached Memory (NAM) system. IntelliProp also announced three FPGA (field-programmable gate array) products that incorporate its Omega Memory Fabric. The company says its memory-agnostic innovations will help drive the adoption of composable memory, which will lead to significant improvements in power consumption and efficiency in the data center. The company says its Omega Memory Fabric has the following features:

  • Dynamic multipathing and memory allocation
  • E2E security using AES-XTS 256 with added integrity
  • Supports peer-to-peer treeless topologies
  • Scaling management for large deployments using multiple fabrics/subnets and distributed managers
  • Direct memory access (DMA) for efficient movement of data between memory tiers without tying up CPU cores (see the sketch after this list)
  • Memory agnostic and up to 10x faster than RDMA
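
The DMA feature above is about overlapping data movement with computation. The Python sketch below illustrates that idea only, using a thread as a stand-in for a hardware DMA engine; none of the names in it come from IntelliProp’s software.

```python
# Illustrative sketch: a "DMA engine" copies a buffer between memory tiers
# while the application keeps computing, blocking only when the data is
# actually needed.  A Python thread stands in for the hardware engine.

import threading

def dma_copy(src: bytearray, dst: bytearray, done: threading.Event) -> None:
    """Stand-in for a hardware DMA engine moving a buffer between tiers."""
    dst[:] = src            # the copy the engine performs
    done.set()              # completion signal, like a DMA interrupt

src = bytearray(b"x" * 1_000_000)   # buffer in far (fabric) memory
dst = bytearray(len(src))           # destination in near memory
done = threading.Event()

# Kick off the transfer, then keep doing useful work on the CPU core.
threading.Thread(target=dma_copy, args=(src, dst, done)).start()
useful_work = sum(range(100_000))   # CPU work overlapped with the transfer

done.wait()                         # only block when the data is needed
assert dst == src
```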

The three FPGA solutions, an adapter, a switch and a fabric manager, connect CXL devices to CXL hosts; IntelliProp says ASIC versions will be available in 2023. The company says the solutions allow data centers to increase performance, scale from tens to thousands of host nodes, consume less power because data travels over fewer hops, and enable mixed use of shared DRAM (fast memory) and shared storage-class memory (SCM, slower memory).
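
As a rough illustration of the fabric manager’s role, the sketch below carves regions out of a shared memory pool and assigns them to host nodes. The class and method names are assumptions made for this example, not IntelliProp’s actual fabric management interface.

```python
# Minimal sketch of fabric-manager-style allocation: slices of a shared
# DRAM/SCM pool are assigned to hosts and returned to the pool when freed.

class FabricManager:
    def __init__(self, pool_gb: int):
        self.free_gb = pool_gb
        self.assignments: dict[str, int] = {}   # host -> GB assigned

    def assign(self, host: str, size_gb: int) -> None:
        """Give a host a slice of the shared pool, if capacity remains."""
        if size_gb > self.free_gb:
            raise MemoryError(f"pool exhausted, {self.free_gb} GB left")
        self.free_gb -= size_gb
        self.assignments[host] = self.assignments.get(host, 0) + size_gb

    def release(self, host: str) -> None:
        """Return a host's memory to the pool so another host can use it."""
        self.free_gb += self.assignments.pop(host, 0)

fm = FabricManager(pool_gb=4096)   # shared memory pool on the fabric
fm.assign("host-01", 512)
fm.assign("host-02", 1024)
fm.release("host-01")              # memory goes back to the pool
print(fm.assignments, fm.free_gb)
```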

CXL is poised to change the way memory is used in computer architectures, according to the 2022 Hot Chips tutorial and talks at SNIA SDC. IntelliProp showcased its Omega Memory Fabric technology and three FPGA solutions to enable CXL memory fabrics.


