Compute Express Link CXL 3.0 is the Exciting Building Block for Disaggregation

  • Thread starter Patrick Kennedy
  • Start date

A-W-A

New Member
May 14, 2022
Perhaps we can get some of those old-fashioned CXL 2.0 memory modules from SK Hynix, Micron, and Samsung filtering down into the home lab space now /s
 

i386

Well-Known Member
Mar 18, 2016
Germany
Quoting the article: "CXL is lower bandwidth and higher latency with this approach than we typically see from local memory."
I'm wondering if CXL-connected memory is faster (bandwidth- and latency-wise) than NVMe/local storage...
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
I'm thinking that one big reason these interconnects are blazing fast and low-latency is that they bypass large swaths of the networking stack. As scalability and security become more of a concern, are we doomed to reinvent the networking stack?
 

Patrick

Administrator
Staff member
Dec 21, 2010
i386 said:
"I'm wondering if CXL-connected memory is faster (bandwidth- and latency-wise) than NVMe/local storage..."
I was told this week that CXL latency is roughly a socket-to-socket NUMA hop.