Retimer as opposed to redriver, assuming you need one. It's not saying you always need a retimer; rather, if you're driving long cables or traces, you need a more complicated, PCIe-protocol-aware retimer instead of the simple, protocol-agnostic redrivers that were common with PCIe 3.0.
That website also gives some helpful info for figuring out whether you're likely to need one or not. The end-to-end connection needs to have no more than 36dB of attenuation, and with PCIe 4.0 you're losing about 2.3dB per inch of trace on a standard FR4 PCB. Connectors add a loss of about 1.5dB each. So, for a standard AIC, we have 1 connector for the card slot, and let's say two inches to route the traces from the edge of the card to the ASIC, which means the slot can be a maximum of about 13 inches from the CPU.

OTOH, if we want to use an NVMe HBA, then we have three or four connectors in the chain (MB to HBA, HBA to cable, cable to drive or backplane, and backplane to drive if a backplane is used), which eats up as much as 6dB of our budget right off the bat. If we assume cables have a similar dB loss per inch as a PCB (which is probably not a great assumption, but I can't find a good number for cables), then we have about 13 inches available for CPU->slot, slot->cable connector, and cable->(backplane)->drive. Anything beyond that would need a retimer, which would reset our budget to the full 36dB at the point in the chain where it's inserted.
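To make that budget math concrete, here's the same arithmetic as a quick sketch. The 36dB, 2.3dB/inch, and 1.5dB/connector figures are the ones above; the 2-inch on-card trace and the connector counts are just my working assumptions, and cables are (dubiously) treated the same as board trace.

```python
# PCIe 4.0 insertion-loss budget sketch, using the figures quoted above:
# ~36dB end-to-end, ~2.3dB per inch of FR4 trace, ~1.5dB per connector,
# with cables treated the same as PCB trace (a rough assumption).

BUDGET_DB = 36.0
DB_PER_INCH = 2.3
DB_PER_CONNECTOR = 1.5

def remaining_inches(connectors, fixed_trace_inches=0.0):
    """Inches of trace/cable left after connector losses and any fixed trace."""
    remaining_db = (BUDGET_DB
                    - connectors * DB_PER_CONNECTOR
                    - fixed_trace_inches * DB_PER_INCH)
    return remaining_db / DB_PER_INCH

# Plain AIC: 1 slot connector plus ~2" of trace on the card itself.
print(f"AIC: ~{remaining_inches(1, 2):.0f} inches of board trace, CPU -> slot")

# NVMe HBA + backplane: MB->HBA, HBA->cable, cable->backplane, backplane->drive.
print(f"HBA/backplane: ~{remaining_inches(4):.0f} inches total for trace + cable")
```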
For PCIe 5 these numbers are much worse, & I expect we'll start seeing optical PCIe links become more common when PCIe 5 HBAs are.
What a SPECTACULAR FIND to have read your post. THANK YOU. I had no idea WTF ReTimer vs. ReDriver meant before ... though, I was going to buy the card below (which I realize is PCIe 3.0 and thus germane to the thread):
SuperMicro ReTimer -- AOC-SLG3-4E4T
That said, and I apologize -- but I've been on a quest for an answer for about a month of posting questions here & on the TrueNAS forums: can I or can't I connect 10-12 NVMe (x4) SSDs, and if I can't -- despite the 80 lanes, provided I use cards in slots that actually have lanes mapped to them -- then what the hell is the limiting factor?? Because the only things I even know to think about are CPU lanes vs. physical slot mapping, and the QPI (which I'd LOVE to somehow be the "bottleneck," given each link is "limited" to 16GB/s at the slowest).
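Here's my napkin math on the QPI question, in case I'm off somewhere. I'm assuming ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane and taking the ~16GB/s-per-QPI-link figure at face value; the 12-drive count and the even split across sockets are just for illustration.

```python
# Napkin math for the QPI question. Assumes ~0.985 GB/s usable per PCIe 3.0
# lane (8 GT/s, 128b/130b encoding); the 16 GB/s-per-QPI-link figure is the
# one quoted above and depends on the actual QPI speed of the CPUs.

GBPS_PER_PCIE3_LANE = 0.985
LANES_PER_NVME = 4
QPI_LINK_GBPS = 16.0

drives = 12
aggregate = drives * LANES_PER_NVME * GBPS_PER_PCIE3_LANE
print(f"{drives}x NVMe x4, theoretical aggregate: ~{aggregate:.0f} GB/s")

# Worst case for QPI: every byte a drive moves lands in the *other* socket's
# memory, so roughly the drives hanging off one CPU (~half) cross the link.
print(f"Worst-case cross-socket traffic: ~{aggregate / 2:.0f} GB/s "
      f"vs ~{QPI_LINK_GBPS:.0f} GB/s per QPI link")
```

Obviously real workloads won't peg all 12 drives at once, and I believe these CPUs have two QPI links between the sockets, so I genuinely don't know how much that worst case matters in practice.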
The unit is a Dell PowerEdge R730XD:
• 2P -- E5-2600v3 or v4
• 10x - 12x NVMe (x4) drives ...
Other than the NVMe drives, SLOG, and a Fusion Pool mirror -- just an SFP+ card
(If it makes a difference: the unit I ordered maps lanes to a mezzanine slot but doesn't include said SFP+ card, so I'll just order an SFP+ for it.)
The unit has two candidate PCIe layouts (which I won't know until I receive it, unless the seller sends me the service tag):
Option A -- PCIe 3.0 Slots:
Option B -- PCIe 3.0 Slots:
Even Option A (1x 16-lane slot) ... with (1) x16 card + (3) x8 HBA cards, that's 40 lanes and 10x NVMe drives.
Which I thought would still leave 3 more x8 slots -- worst-case scenario.
Of those, I'd use (2) x8 slots for AIC Optanes as a mirrored Fusion Pool (for metadata)
And I'd have plenty of DIMM slots available for a 16GB - 32GB SLOG using NV-DIMMs ... (would 16GB be adequate?)
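And here's how I'm tallying the lanes for Option A -- assuming each NVMe gets a full x4 and the cards are plain bifurcation/retimer cards with no PLX switch. (This ignores how the slots actually map to each CPU, which I gather is the real question about the riser layout.)

```python
# Lane/drive tally for Option A as I understand it: one x16 quad-NVMe card
# plus three x8 cards, every NVMe on a full x4, no PLX switch on any card.

LANES_PER_NVME = 4
CPU_LANES_TOTAL = 80            # 2x E5-2600 v3/v4, 40 lanes each

nvme_cards = [16, 8, 8, 8]      # lane width of each NVMe card

lanes_used = sum(nvme_cards)
drives = lanes_used // LANES_PER_NVME
print(f"NVMe cards use {lanes_used} lanes -> {drives} drives")
print(f"CPU lanes left: {CPU_LANES_TOTAL - lanes_used} (SFP+, Optane AICs, etc.)")
```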
Dell didn't sell this with more than 4 NVMe drives -- despite their outrageous pricing ...
Intel also claims you can only use this config for up to 4 drives ...
But I've used a HighPoint x16 card with my i7-8700K ... which has a total of 16 PCIe lanes ...
And that was with a mirrored pair of NVMes on the motherboard for the boot drive, a data recovery (PC-3000) card, and an SFP+.
I'm just lost as to why everyone (Dell, Intel, etc) shits on the idea of populating it with more NVMe drives ... with 80 lanes??
THANK YOU!