The "hardware accelerator" in this case is more of a "hardware decelerator". It will effectively present them to the OS as a single SCSI device, rather than native NVMe devices, so all of the advancements that come out of the NVMe hardware and software stack are effectively gone. Especially if you plan to run Linux, software options are far better. I agree that if you use one of those, you *certainly* won't be maxing out the sequential speeds.
If you buy an 826, you aren't really limited by backplane upgrades, since they've been supporting that form factor for ages and likely will keep doing so. You can get anything from a SAS1 backplane up to a Gen5 NVMe + 24G SAS backplane. NVMe backplanes can also sometimes run a generation above what they were designed for, as long as they aren't one of the few switch-based backplanes.
I think this is somewhat misleading. I've done back-to-back tests, and I've seen nothing but good results.
Let me explain.
If you are talking raw throughput and latency of an NVMe drive attached directly to CPU lanes versus going through anything in between, then yes, you are never going to see the exact same numbers, but the variances are negligible and the overall gains are impressive...
But now some may ask "what gains?"... and here is the back-to-back testing (it pays dividends to actually test things yourself instead of following the internet's "hardware RAID is dead" bandwagon and all its arguments, which I've been frowning at for years).
When you lock an SSD directly onto CPU lanes, you're also taxing a core or two. Take the drives away from the CPU and put them behind a well-engineered accelerator/RAID card, and you offload that hit on the cores to the discrete card.
So do a 1 TB copy between two Gen4 drives natively connected to CPU lanes, and log the CPU load in the background via a script (something like the sketch below).
Now do the same with those NVMe drives on the RAID card: watch the CPU sit there listening to crickets, while time-wise you only lose roughly a couple of percent.
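For anyone who wants to reproduce that kind of test, here's a rough sketch of the background logger I mean. The source/destination paths and the dd-based copy are placeholders/assumptions; swap in whatever copy method and mount points match your setup:

```python
#!/usr/bin/env python3
"""Sample overall CPU utilisation while a large copy runs in the background.

SRC/DST below are placeholders: point them at files on the two drives you
want to test (direct-attached vs. behind the card) and compare the runs.
"""
import subprocess
import time

SRC = "/mnt/nvme_a/test_1tb.bin"   # placeholder: file on drive A
DST = "/mnt/nvme_b/test_1tb.bin"   # placeholder: target on drive B


def cpu_times():
    """Return (busy, total) jiffies aggregated across all cores."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]   # idle + iowait count as "not busy"
    total = sum(fields[:8])        # exclude guest columns (already in user/nice)
    return total - idle, total


# Kick off the copy; direct I/O keeps the page cache from skewing the result.
copy = subprocess.Popen(
    ["dd", f"if={SRC}", f"of={DST}", "bs=1M", "iflag=direct", "oflag=direct"]
)

prev_busy, prev_total = cpu_times()
while copy.poll() is None:
    time.sleep(1)
    busy, total = cpu_times()
    util = 100.0 * (busy - prev_busy) / max(1, total - prev_total)
    print(f"CPU busy: {util:5.1f}%")
    prev_busy, prev_total = busy, total

print("copy finished, exit code", copy.returncode)
```

Run it once with both drives on CPU lanes and once with them behind the card, then compare the logged utilisation and the elapsed time.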
Point being, I'm building an energy-efficient virtual server where I want to minimise CPU tax, and I'd rather the CPU listen to crickets and service my 'workstation' guest with solid IPC than be forced into core pinning and other nightmare configuration just to get a crackle-free, lag-free experience.
Regarding the 826, it might not be limited that way, but regardless, I'm honestly dead against backplanes for custom builds where I don't really know where my expansion needs will head.
Because even with manufacturer options, when you go to find a backplane that meets your needs, you discover it's only available via xyz with a hefty scalper tax, and even then most of the time I never find what I need. If I want to add more NVMe drives via switch chaining, or throw in an extra eight SATA SSDs via a SATA expander chip off the back of one of the switches, a server manufacturer isn't going to suddenly come to my rescue and drop me a perfectly matched backplane for free, if one is even available...
Cabling is free of headaches when you keep changing the foundations. I have servers that have been running for years on direct cabling, and the number of times I've switched things around with nothing but smiles, because there are no backplane limits, is a testament to that.
And now, with a noticeable industry drive toward direct MCIO cabling for Gen5 drives due to electrical tolerances, the joys of customisation are becoming a trend again.
PS. I'd like to add that I only buy Broadcom and Highpoint cables for direct connections to U.3/U.2 drives, and anyone who has used these official branded cables will tell you they fit super tight and don't nudge loose from a little pull. I avoid all the mass-produced no-name stuff. If people's experience is based on mass-market junk, fair enough, but people really should stop drawing broad conclusions from such experiences; that's another pet hate of mine I've been watching silently on internet forums for years. Sure, budgets are budgets, but my point stands. Plus I have zero tolerance for WHEA errors (PCIe AER, which a hell of a lot of boards have disabled by default without even giving end users the option to enable it, so most people don't even realise their computer is doing constant retries on PCIe transfers; talk about being blind) lol...
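If you're on Linux and want to know whether your links are actually clean, here's a minimal sketch, assuming a kernel that exposes the aer_dev_* counters in sysfs (they only appear when AER is enabled and supported for the device):

```python
#!/usr/bin/env python3
"""Flag PCIe devices whose AER correctable-error counters are non-zero.

These sysfs files only exist when the platform/kernel exposes AER for the
device; if nothing is listed at all, AER may simply be disabled in firmware.
"""
import glob
import os

found_any = False
for path in sorted(glob.glob("/sys/bus/pci/devices/*/aer_dev_correctable")):
    dev = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        counters = dict(line.split() for line in f if len(line.split()) == 2)
    found_any = True
    total = int(counters.get("TOTAL_ERR_COR", "0"))
    if total:
        print(f"{dev}: {total} correctable errors (link retries/replays): {counters}")

if not found_any:
    print("No aer_dev_correctable files found; AER is probably disabled or unsupported.")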
On another note, I have 1-metre cables driving some Gen4 drives, and yes, those cables cost a fortune, but they work beautifully. A friend has been lucky enough to be playing with some Gen5 MCIO cabling, and those cables cost nearly as much as some NVMe drives! lol. Until we go full optical, silver/copper cabling is only going to get pricier as we head toward Gen6 and beyond.