HighPoint SSD7540 PCIe 4.0 x16 8-Port M.2 NVMe Card - Thoughts?

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
Trying to purchase a single HoneyBadger from Liqid is like pulling teeth, and now HighPoint has released an 8-port PCIe 4.0 M.2 card through normal channels.

Has anyone tried out this card, and/or compared it to the HoneyBadger LQD4500?

The Liqid devs said they modified the switch firmware to let it handle NVMe I/O better and achieve higher IOPS and bandwidth than a stock PLX chip; however, Microsemi and a few others released specialized storage switches last year, and I get the feeling this card uses one of them.

I also feel like one of these cards with 8x Samsung 980s, or even Sabrent Rockets, would be a more compact, cheaper solution than a U.2 chassis and stack. It would make perfect in-chassis storage for AI workloads, or a shared storage server for DBs, VMs and workstations.

Highpoint: SSD7540-overview
Amazon: Amazon.com (USD $1,264.24 - 26JAN2022)
 

i386

Well-Known Member
Mar 18, 2016
Germany
The 8-port version only supports M.2 SSDs up to 80mm, so no enterprise M.2 SSDs with PLP and 110mm length.
I also feel like one of these cards with 8x Samsung 980s, or even Sabrent Rockets, would be a more compact, cheaper solution than a U.2 chassis and stack.
U.2 is enterprise stuff with powerloss protection.
For all the projects I've worked on in the last 10+ years, even a slight chance of data corruption/loss was not acceptable...
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
Didn't see the 80mm limitation, but good to know. Thankfully the 980 and the Rocket are both 80mm.

As for the power-loss stuff, I'm more from the old-school Google/ZFS approach: assume hardware will fail, and cover power-loss scenarios in software rather than throwing money at power-loss protection at every stage of the stack and hoping they don't all fail, or that data isn't corrupted in the process. Anyway, any sane developer or architect will always assume a non-zero RPO when it comes to data, and if you're counting on a zero RPO in the storage stack, you've probably got some much larger single points of failure that need to be addressed.
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
LOL, I'm not sure if you remember the original design philosophy of ZFS, which was to assume that hardware will always fail, most often permanently, regardless of whether it's cheap consumer gear or expensive enterprise equipment. OG Google had the same viewpoint when they built everything basically out of scrap computer parts. It's easy to get distracted by small-scale measures and issues that might brick a node or stack element, but solving them is never an excuse to ignore the failure scenario completely. Redundancy and loss-mitigation will always get you farther than failure-prevention or even loss-prevention.

But don't worry, I'm sure you're very good at your job; you've been at it 10+ years.
 

jpmomo

Active Member
Aug 12, 2018
bryan_v, you can PM me for some details, at least on a pair of 7505 cards. These are 4x M.2 cards each, not the 8x M.2. My objective for the testing was to fully utilize the x16 PCIe Gen 4 slots. You can also look at my previous post on this testing, where I was able to get over 300Gbps r/w on the "cheap"!


I am looking into the Graid cards, which are supposed to take all of the RAID work off the main CPU and onto their card, which is a form of GPU. Going that route I wouldn't use the HighPoint cards, but a couple of PCIe Gen 4 AICs (from Gigabyte), which are also 4x M.2 but even cheaper. The Graid card would be tested with a Dell R750 with 24 NVMe U.2 drives as the main use case. It would also be interesting to see something like you are proposing: a few M.2 AICs with PCIe switches. You could get 32 of the M.2 drives (2TB Seagate FireCuda 530s) in 4 of those cards. That should all fit in something like an ASRock Rack ROMED8. You would obviously be switching the 8 M.2s in each card into an x16 PCIe Gen 4 slot. I am not sure how 8 vs 4 M.2s per card would impact overall performance.

What kind of pricing is the HoneyBadger solution? I always assumed that was in a different league compared to something like HighPoint.
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
I keep asking Liqid for pricing, but the rep never follows through and sends it. I suspect it's because I only asked for a single card, and I asked just before Christmas.

The 7505 data looks promising. The Liqid dev in the LTT forum (when he was active) said you would really notice it if you run the RAID on the CPU; at some point drives will just start dropping out of an array (or worse, lock up the entire array). Apparently normal PCIe switch chips are not NVMe-aware, so they don't know which commands and interrupts are more latency-sensitive than others. Anyway, I wouldn't trust any card-based RAID solution; 15 years ago a SATA RAID card of mine died, and there was no way to get the data off the drives without buying a new card.
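For what it's worth, that dead-card lock-in is exactly what plain Linux md sidesteps: the array metadata lives on the drives themselves, so the array reassembles on any Linux box regardless of which carrier card the drives sit behind. A sketch, with placeholder device names rather than a tested config:

```shell
# Software RAID-10 across 8 NVMe drives (illustrative device names).
# md writes its metadata to the member drives, not to any card, so no
# specific piece of hardware is needed to recover the data later.
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/nvme[0-7]n1

# On replacement hardware, scan the drives and reassemble the same array:
mdadm --assemble --scan
```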

My hope was that they used an NVMe-aware switch so I could just use them for redundant object storage. In a single 1U chassis with a last-gen EPYC stack, you could fit a minimum of 8TB x 8 SSDs x 4 cards = 256TB of "cheap" SSD storage. That's a way lower cost per TB than a Supermicro EDSFF E1.L server (1029P-NEL32R), which would set you back about CAD $75k (I think that's ~USD $60k); the AIC build is probably about 1/3 of that cost. That means you could have 3-node geo-HA for the same cost as a single Supermicro storage node.
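Sketching out the back-of-napkin math (figures straight from this post, prices rounded; the 1/3-cost assumption is mine, not a quote):

```shell
# Raw capacity of the hypothetical 1U build: 4 AICs, 8 SSDs each, 8TB per SSD
cards=4; ssds_per_card=8; tb_per_ssd=8
total_tb=$((cards * ssds_per_card * tb_per_ssd))
echo "raw capacity: ${total_tb} TB"          # 256 TB

# Rough cost per TB vs. the ~USD $60k EDSFF box, assuming the AIC build
# lands around 1/3 of that (~$20k). Forum-napkin numbers only.
echo "EDSFF: $((60000 / total_tb)) USD/TB"
echo "AIC:   $((20000 / total_tb)) USD/TB"
```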

Scrap computer parts FTW!
 

jpmomo

Active Member
Aug 12, 2018
Isn't the HighPoint solution considered a "card-based RAID solution"? Or are you just trying to find an add-in card that supports 8 M.2 drives and use software to take care of the RAID? I thought Gigabyte was supposed to release an 8x M.2 add-in card that would just do the PCIe switching, which could be used for software-only RAID.
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
Hey @jpmomo, just did a search: GB only released a 4x M.2 passthrough carrier card :(

Yea, the idea is to find an alternative to the HoneyBadger, and HighPoint seems to be the only one on the market. Although there are quite a few NVMe-focused PCIe switches hitting the market, so maybe this year we'll see more.
 

Mithril

Active Member
Sep 13, 2019
Considering the cooling requirements and cost, why not 2 cards with 4x M.2? IIRC there were some decently priced cards someone linked on AliExpress. If you are pairing it with ZFS or some other software-RAID solution, I don't think "NVMe aware" is needed (and without some solid, repeatable testing and proof, that sounds like 1000% marketing smoke up your backside, which the storage industry has plenty of).

Unless your real-world workloads are all best-case anyway, you could stuff 4 modern NVMe drives onto the same 4 Gen 4 lanes via a switch and have a hard time figuring out which is which in a double-blind test via end-user tasks alone.

I would say PLP still matters somewhat for ZFS, but I'd opt for an Optane SLOG device to protect the journal rather than worrying about each drive, unless you absolutely need massive sync performance.
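The Optane-SLOG route is basically a one-liner in ZFS; pool and device names below are placeholders, not a recommendation of any specific layout:

```shell
# Attach a dedicated log (SLOG) device so synchronous writes land on the
# Optane drive instead of the main vdevs (pool/device names illustrative).
zpool add tank log /dev/nvme8n1

# Mirrored SLOG, if you want the journal to survive a log-device failure:
# zpool add tank log mirror /dev/nvme8n1 /dev/nvme9n1
```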
 

jpmomo

Active Member
Aug 12, 2018
As mentioned, I have both the HighPoint (RAID) and Gigabyte (4x M.2 AIC) cards. My question to Bryan was why he was looking into either the Liqid or HighPoint cards and not just a cheap AIC like the Gigabyte (ASUS also makes a cheap PCIe 4.0 AIC). The Gigabyte AICs are cheap and PCIe 4.0 x16. If you used the ROMED8 MB, you could fit 6 or even 7 of those cards (slot 2 on that MB is configurable). If he was trying to avoid a HW RAID card failure and just rely on SW-based RAID, it might be better to go the cheap AIC route. I was planning on combining the Graid card (GPU-based NVMe RAID) with the cheap Gigabyte AICs (with as many M.2 drives as I could fit) for performance reasons.

I did see someone in this forum show some pretty impressive fio stats with some SW-based RAID and a few SM-based AICs. I don't think the Graid cards are that cheap, and they may not get much more performance than the 50GB/s the other solution was able to get.
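For anyone wanting to reproduce numbers like that, a throughput-oriented fio run along these lines is the usual starting point (parameters and the target device are illustrative, not the exact ones from the post referenced):

```shell
# Sequential 1MiB reads, deep queue, 8 workers, bypassing the page cache.
# /dev/md0 is a placeholder for whatever array is being tested.
fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
    --ioengine=libaio --iodepth=32 --numjobs=8 --direct=1 \
    --runtime=60 --time_based --group_reporting
```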
 

bryan_v

Active Member
Nov 5, 2021
Toronto, Ontario
Yea, there aren't too many AICs with a PCIe switch that let you slap 8x M.2 drives into an x16 slot.

Though there is definitely a shortage of PCIe switch chips, because even the cards for 4x M.2 in an x8 slot are missing from Amazon and the distributor inventories.
 

jpmomo

Active Member
Aug 12, 2018
There are plenty of the 4x M.2 PCIe Gen 4 x16 cards available on both eBay and Amazon. They are pretty cheap as well. You get the full bandwidth since they are x16 and the M.2s are x4 each, so no switching is needed.