Compute Express Link CXL 3.0 is the Exciting Building Block for Disaggregation

  • Thread starter Patrick Kennedy

A-W-A

New Member
May 14, 2022
Perhaps we can get some of those old-fashioned CXL 2.0 memory modules from SK Hynix, Micron, and Samsung filtering down into the home lab space now /s
 

i386

Well-Known Member
Mar 18, 2016
Germany
CXL is lower bandwidth and higher latency with this approach than we typically see from local memory.
I'm wondering if CXL-connected memory is faster (bandwidth- and latency-wise) than NVMe/local storage...
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
I'm thinking that one big reason these interconnects are blazing fast and low-latency is that they bypass large swaths of the networking stack. As scalability and security become more of a concern, are we doomed to reinvent the networking stack?
 

Patrick

Administrator
Staff member
Dec 21, 2010
I'm wondering if CXL-connected memory is faster (bandwidth- and latency-wise) than NVMe/local storage...
I was told this week that CXL latency is about a socket-to-socket NUMA node hop.