Experiences with PCIe to SFF-8643 adapters | Retrofitting Servers with U.2


UnknownPommes

Active Member
Aug 28, 2022
What's your opinion on those kind of cheap PCIe to SFF-8643 or OCuLink cards, like this one?

They are only $30 and would let me connect four U.2 drives (probably RAID-Z1 or RAID 10) to some of my Supermicro X10 and X11 servers that otherwise would not be able to use U.2. (I know the backplanes won't support it, so I am just going to use the drives internally.)

Those servers are mainly used as hypervisors, and my goal is better IOPS performance: the current Intel DC SATA SSDs can barely do 35k write IOPS each, while the U.2 drives I have been looking at can do around 400k each.

Just so I understand: these are simple PCIe adapters, not HBA cards, so SAS drives won't work on them, correct?
Do they have drawbacks compared to conventional tri-mode HBAs?
Are there any kind of expander cards for these, or is the only way to add more drives to just add more cards?

Thx
 

nexox

Well-Known Member
May 3, 2023
As long as your board/slot supports bifurcation they work fine (I'm using a two-port card from 10GTek and the build quality seems nice), though note that the cables aren't cheap and each one needs a SATA power connector, which might be somewhat annoying inside a server. You're right that they don't support SAS or anything else; they really just split the PCIe slot into four cables. If you want expander-like functionality you need a PCIe switch (search for PLX), usually also a PCIe card, with more SFF-8643 ports than host PCIe lanes (e.g. an x16 slot gives you 8 ports, with the obvious bandwidth-sharing limitations that implies).
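
If you want to sanity-check the bifurcation side after installing the card, here's a rough Python sketch of my own (assuming a Linux host and the standard sysfs layout; the x4 expectation below is just the usual lane count for a U.2 drive, not anything specific to these cards):

```python
#!/usr/bin/env python3
"""Rough check: did each NVMe drive behind the adapter train at the expected link?"""
import glob
import os

EXPECTED_WIDTH = "4"  # U.2 NVMe drives normally negotiate a x4 link

def read_attr(path):
    """Return a sysfs attribute as a stripped string, or 'n/a' if unreadable."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    # For PCIe-attached controllers, the 'device' link points at the PCIe function.
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
    width = read_attr(os.path.join(pci_dev, "current_link_width"))
    speed = read_attr(os.path.join(pci_dev, "current_link_speed"))
    status = "OK" if width == EXPECTED_WIDTH else "check cabling/bifurcation"
    print(f"{os.path.basename(ctrl)}: {speed}, x{width} -> {status}")
```

If a drive shows up at x1 or x2, the slot is probably not bifurcated (or a cable is flaky).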
 

UnknownPommes

Active Member
Aug 28, 2022
(e.g. an x16 slot gives you 8 ports, with the obvious bandwidth-sharing limitations that implies)
OK, thanks. Yeah, I know about the cables and the bandwidth part, but as far as I understand, even if I use a PLX card it should only really affect the sequential workloads that max out the cable, not the IOPS, as long as the individual block sizes are small enough that they don't hit the throughput limit, right?
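
Rough numbers to sanity-check my own assumption (everything below is a guess based on the specs mentioned in this thread, not a measurement):

```python
# Back-of-the-envelope: can small random I/O actually saturate an x16 Gen3 slot?
drives = 4
iops_per_drive = 400_000   # the ~400k 4K random write spec mentioned above
block = 4096               # bytes per I/O
seq_per_drive = 3.2e9      # ~3.2 GB/s sequential, typical for a Gen3 x4 U.2 drive

slot_limit = 16 * 8e9 * (128 / 130) / 8   # x16 PCIe 3.0 payload, ~15.75 GB/s theoretical

random_load = drives * iops_per_drive * block
seq_load = drives * seq_per_drive

print(f"4K random load : {random_load / 1e9:5.1f} GB/s")
print(f"sequential load: {seq_load / 1e9:5.1f} GB/s")
print(f"x16 Gen3 limit : {slot_limit / 1e9:5.1f} GB/s (theoretical; real-world is lower)")
```

So four drives doing 4K random I/O land around 6.6 GB/s, well under the slot limit, while four doing sequential at once get close to it, and eight drives behind a switch would clearly be sharing.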
 

nexox

Well-Known Member
May 3, 2023
A PLX switch will also add a bit of latency, which could add up at higher IOPS. Beyond that it depends on how many devices you're loading concurrently; you could possibly still hit the limits of the x16 3.0 slot.
 

ericloewe

Active Member
Apr 24, 2017
Do they have drawbacks compared to conventional tri-mode HBAs?
You're not paying rent to Broadcom and Microchip, which affects their execs' bonuses. If that's not bad enough, you have to use the PCIe stack instead of presenting NVMe devices as SCSI. You might even need to buy fewer disks to meet your IOPS targets!
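
If you want to see the difference for yourself once the drives are in, here's a trivial sketch of my own (assuming Linux): with a plain adapter the drives enumerate as native NVMe block devices, whereas behind a tri-mode HBA the same drives would be presented as translated SCSI disks.

```python
import glob
import os

# Native NVMe namespaces (what a dumb PCIe-to-SFF-8643 adapter gives you)
nvme = sorted(os.path.basename(p) for p in glob.glob("/sys/block/nvme*"))
# SCSI-presented disks (what you'd see behind a tri-mode HBA, or plain SATA/SAS)
scsi = sorted(os.path.basename(p) for p in glob.glob("/sys/block/sd*"))

print("native NVMe namespaces:", ", ".join(nvme) or "none")
print("SCSI/SATA disks       :", ", ".join(scsi) or "none")
```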
 

UnknownPommes

Active Member
Aug 28, 2022
You're not paying rent to Broadcom and Microchip, which affects their execs' bonuses. If that's not bad enough, you have to use the PCIe stack instead of presenting NVMe devices as SCSI. You might even need to buy fewer disks to meet your IOPS targets!
:p
Yeah, I already ordered two of the 10Gtek cards.