PCIe 4.0 HBAs?

jeffarese

New Member
Sep 10, 2019
8
0
1
I was wondering if there are going to be HBAs that use PCIe 4.0 so a single HBA can have double the ports. It would be good for an X570-powered NAS.
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
I am sure we will see them soon. LSI HBAs today even support NVMe, so PCIe 4.0 makes total sense.
 

jeffarese

New Member
Sep 10, 2019
8
0
1
Yes there will be.

In 2020, as part of the U.3 transition, we discussed here https://www.servethehome.com/toshiba-aggressively-pushes-pcie-gen4-nvme-ssds/ one of the big benefits is that backplanes work across different types of drives. Usually, the storage guys are a bit behind the platform guys. Ice Lake Xeons in 2020 is when all of the current systems get re-designed.
Nice!

I think that makes X570 a very interesting choice for a low-power yet powerful NAS, since with a few PCIe 4.0 lanes you could easily add more drives than would fit in almost any case.

What do you think?
 

nezach

Active Member
Oct 14, 2012
210
128
43
PCIe 4.0 cards from LSI will be based on Aero/Sea (3816/3916) controllers. They have been working on adding mpt3sas driver support for a while.

Here is one recent commit describing the different performance modes that will be supported by the new controllers:

scsi: mpt3sas: Introduce perf_mode module parameter · torvalds/linux@ca7e1e9

The most interesting bit of info from that commit:

Code:
4k Random Read IO performance numbers on 24 SAS SSD drives for above three
performance modes. Performance data is from Intel Skylake and HGST SS300
(drive model SDLL1DLR400GCCA1).

IOPs:
 -----------------------------------------------------------------------
  |perf_mode    | qd = 1 | qd = 64 |   note                             |
  |-------------|--------|---------|-------------------------------------
  |balanced     |  259K  |  3061k  | Provides max performance numbers   |
  |             |        |         | both on lower QD workload &        |
  |             |        |         | also on higher QD workload         |
  |-------------|--------|---------|-------------------------------------
  |iops         |  220K  |  3100k  | Provides max performance numbers   |
  |             |        |         | only on higher QD workload.        |
  |-------------|--------|---------|-------------------------------------
  |latency      |  246k  |  2226k  | Provides good performance numbers  |
  |             |        |         | only on lower QD workload.         |
  -----------------------------------------------------------------------

Average Latency:
  -----------------------------------------------------
  |perf_mode    |  qd = 1      |    qd = 64           |
  |-------------|--------------|----------------------|
  |balanced     |  92.05 usec  |    501.12 usec       |
  |-------------|--------------|----------------------|
  |iops         |  108.40 usec |    498.10 usec       |
  |-------------|--------------|----------------------|
  |latency      |  97.10 usec  |    689.26 usec       |
  -----------------------------------------------------
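
If you want to play with these modes once the hardware and a new enough kernel are around, perf_mode is just a module parameter, so something like the below should work. This is an untested sketch: the 0 = balanced, 1 = iops, 2 = latency mapping is taken from the commit, and the modprobe.d file name is arbitrary.

Code:
# persistent: set the mode at driver load time
echo "options mpt3sas perf_mode=0" | sudo tee /etc/modprobe.d/mpt3sas-perf.conf

# or one-off: reload the driver with the parameter (don't do this on a box
# that boots from an mpt3sas-attached disk)
sudo modprobe -r mpt3sas && sudo modprobe mpt3sas perf_mode=0

# the active value should then be visible in sysfs
cat /sys/module/mpt3sas/parameters/perf_mode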
 

sweden

New Member
Dec 6, 2020
2
0
1
Any new information regarding this? I can't seem to find any news about these cards
 

i386

Well-Known Member
Mar 18, 2016
4,217
1,540
113
34
Germany
Broadcom 95xx cards are PCIe 4.0 and have been available for some time now.
The interesting new features are PCIe 4.0, U.3 and, for the RAID controllers, DDR4-2666 support.

HBAs:
Broadcom 9500-8i: HBA 9500-8i Tri-Mode Storage Adapter
Broadcom 9500-8e: HBA 9500-8e Tri-Mode Storage Adapter
Broadcom 9500-16i: HBA 9500-16i Tri-Mode Storage Adapter
Broadcom 9500-16e: HBA 9500-16e Tri-Mode Storage Adapter

Raid controllers:
Broadcom MegaRAID 9560-8i: MegaRAID 9560-8i
Broadcom MegaRAID 9580-8i8e: MegaRAID 9580-8i8e
Broadcom MegaRAID 9560-16i: MegaRAID 9560-16i
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Has anyone actually got their hands on one of the 9500 HBAs yet?

I saw that the 9500-8i was specced to use an anaemic 6W rather than the 10-15W I'm used to for my other 8i's and I wondered if anyone could verify. That combined with the single-port x8 SFF-8654 would make it a good fit for the sort of compact NAS box that I do a fair few of.
 

i386

Well-Known Member
Mar 18, 2016
4,217
1,540
113
34
Germany
Not yet...

(I want to get a 16-port PCIe 4.0 RAID controller and see what happens when I connect all ports to an 846 SAS3 expander backplane :D
16x 12 GBit/s = 192 GBit/s ~ 19.2 GByte/s
x8 PCIe 4.0 ~ 15.7 GByte/s)
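Back-of-the-envelope for that, assuming 8b/10b encoding on SAS3 (~1.2 GByte/s usable per 12 GBit/s lane) and 128b/130b on PCIe 4.0 (~1.97 GByte/s per lane) - rough numbers only, real-world throughput will be a bit lower:

Code:
echo "16x SAS3 lanes: $(echo '16 * 1.2' | bc) GByte/s"    # ~19.2 on the expander side
echo "PCIe 4.0 x8:    $(echo '8 * 1.969' | bc) GByte/s"   # ~15.75 into the host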
 

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
Has anyone actually got their hands on one of the 9500 HBAs yet?

I saw that the 9500-8i was specced to use an anaemic 6W rather than the 10-15W I'm used to for my other 8i's and I wondered if anyone could verify. That combined with the single-port x8 SFF-8654 would make it a good fit for the sort of compact NAS box that I do a fair few of.
If 6W is true, that's a nice saving for sure.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
It would be indeed. I was getting close to being tempted to think about buying one (and an 8654 -> 2x8643 cable) to try out.

Reading through the spiel, though, I see they're making a big thing about signed firmware, and looking through the downloads there's also a dearth of tools like sas[2|3]ircu and sas[2|3]flash, which makes me think this generation is going to be far, far less crossflash-friendly. Broadcom will be Broadcom, I suppose...
 
  • Like
Reactions: Sleyk

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
You are absolutely right. Broadcom is seemingly going out of their way to stop this crossflashing business we so much like :.)
I think the only surprise I have here is that broadcom didn't try and ruin all of this much sooner...!

The only cables I've seen here in rightpondia with SFF-8654 connectors (I was looking specifically for a SFF-8654 -> 2xSFF-8643) are those made by Broadcom themselves, I dare say other vendors will follow in time if this is still an emerging standard.

They don't seem to have any SFF-8654 -> 2xSFF-8087 cables in existence, and there seem to be two types of SFF-8654 -> 2xSFF-8643 cables - one for NVMe (p/n 05-60002-00) and one for SAS/SATA (p/n 05-60003-00) - so it looks like mixing and matching of SAS/SATA and NVMe on the same controller might not be possible.

I was rather hoping for U.3 connectivity so we could start getting unified backplanes that could take NVMe, SAS and SATA drives with a single set of HBAs and cables, but it seems we're still a fair way from that.
 
  • Like
Reactions: Sleyk

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Ok my friends, here is my tech blog where I did a long write-up of all that I know and found out. Will make a direct thread post about it as well.

Check it out here: Sleyk's Tech Blog
Firstly, thanks for the read. The cable situation does indeed seem to be even more bizarre than my preliminary research had suggested; luckily I'd be well served for my current desires by the regular 05-60003-00 8654 -> 2*8643 SAS/SATA breakout, but I'm glad to see an 8654 -> 8*8784 is technically possible.

Secondly, you're actually mad for making one :p You probably need to talk to a doctor about your caffeine intake.

Thirdly... how DARE you use a serif typeface for some cable numbers, and a sans-serif for others?! You've not even stuck with "serif for odds, sans-serif for evens" or anything remotely sensible :mad: Post reported.
 
  • Like
Reactions: Sleyk

Markito

New Member
Apr 19, 2021
3
0
1
For some reason it keeps getting detected as "potentially spam" even though I am a human being... so I created an account. Hopefully that will let me ask :)

I want an LSI card for FreeNAS that will last a while, and that'd be a 9500 series card. Would love to see some benchmarks before I buy, too. Any hope of STH refreshing the top-picks-freenas-hbas page on this site to include that line of cards? :D Skimmed Sleyk's write-up and loved it (will read later after work), and it looks like the ecosystem still has a way to go to catch up with PCIe 4.0 cards.

The machine I'm thinking of building is something like this:

AMD 5800X
ASUS WS Pro x570 ACE motherboard (with "three" x8 PCIe 4.0 slots; don't ask me how that third slot says 8x but goes through the chipset)
128GB ECC RAM
LSI card
(SECOND? LSI card? Maybe if using 9300 or 9207, I'll need two. But 9500 series... only one?)
Dual port 10gbe NIC in the 8x/4x slot
Cheapo 1x PCIe lane video card
1TB (or 2TB max) NVMe for boot/swap/fast storage that doesn't need redundancy

I want to run ESXi as the bare-metal OS and have a bunch of VMs:
  • FreeNAS running PCI-passthrough to the LSI card
  • PLEX server
  • Security camera VM
  • Personal Win10/MacOS VMs
  • Linux VMs
Thoughts? :) I'd much rather use a PCIe 4.0 card. But not sure about compatibility yet; ESXi supports it but will FreeNAS?
 

im.thatoneguy

Member
Oct 28, 2020
31
8
8
It's strange/unfortunate that the 9500 cards are exclusively PCIe Gen4 x8. The 16i/16e cards mean that your NVMe drives are limited to four Gen4 x2 (4GB/s) instead of four drives at the full x4 (8GB/s) bandwidth. That drives the cost up pretty substantially if you only run 2 drives per card.
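Rough math on that, assuming ~1.97 GByte/s usable per PCIe 4.0 lane and the x8 uplink shared evenly by four drives:

Code:
echo "4x Gen4 x4 drives want: $(echo '16 * 1.969' | bc) GB/s"              # ~31.5 GB/s
echo "x8 Gen4 uplink carries: $(echo '8 * 1.969' | bc) GB/s"               # ~15.75 GB/s
echo "per drive behind x8:    $(echo 'scale=2; 8 * 1.969 / 4' | bc) GB/s"  # ~3.9 GB/s, roughly Gen4 x2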
 