LSI 9500 / 9600 with M.2 NVMe drives questions.


tomtom13

New Member
Aug 21, 2017
Hi,
Maybe I've boxed myself into a corner. I'm trying to create a simple storage solution where I take multiple M.2 drives and connect them to an LSI controller. I've tried the 9500-16i and 9600-24i, and on both the drives are not discovered. Yes, I've used some cheap M.2 NVMe to SFF-8654 8i converters to attach them, but no cigar.

So, the questions are:
- is what I'm trying to achieve actually a supported, feasible thing to do?
- has anybody done it before?
- if feasible, what hardware was used?
 

mattventura

Well-Known Member
Nov 9, 2022
In theory, it should work, but it's suboptimal. LSI/Broadcom tri-mode HBAs do not provide direct access to NVMe drives; they expose them as SAS drives. There are a few advantages (like seamless hotplug regardless of host support), but otherwise it will be slower because the NVMe software stack can't be used.

If that adapter didn't work, I wonder if the 9500 and 9600 are like the 9400 in that they require cables with a proprietary pinout to connect NVMe drives? This adapter is specifically marketed as being for Broadcom cards.
 

tomtom13

New Member
Aug 21, 2017
@mattventura thanks for the hint about the adapter! I'll investigate it (i.e. try to track one down here in the UK -> buy it -> test it) and see where it gets me.

Meanwhile, I hear your point about the slowdown. After posting this, I found some posts here about HighPoint and found the "Rocket 1604A" on their website, but I can't find any confirmation whether it would work in an x8 port (I only have a Gen5 x8 free). I presume, based on what you said, that it would perform night-and-day better than my crappy LSI 9600... but the price is a tad high. Are there any reasonable options to put 4 (or more) NVMe drives into an x8 port (preferably >= Gen4)?
 

tomtom13

New Member
Aug 21, 2017
Bump.
Does anybody know anything about HighPoint cards that have an x16 requirement running in x8 ports?
 

i386

Well-Known Member
Mar 18, 2016
Germany
What SSDs were used?
What cable was used? (There are 4i versions that would connect only one x4 device.)
Did you use a Molex-to-SATA adapter by any chance?

Edit: fixed a typo
 

tomtom13

New Member
Aug 21, 2017
4 TB NVMe KC3000s; cables are 8i (the LSI 9600 and that adapter can only take 8i cables); no Molex-to-SATA adapter, just straight SATA from the PSU.

Still, does anybody know anything about HighPoint x16 cards being used in x8 ports?
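In case it helps anyone who wants to verify this themselves: once a card is seated, lspci shows both what the card supports (LnkCap) and what it actually negotiated (LnkSta), so an x16 card running in an x8 slot should report "Width x8" under LnkSta. A rough sketch (the device address in the comment is a placeholder; run as root, since unprivileged lspci hides some capability fields):

```shell
# List supported vs. negotiated PCIe link parameters for all devices.
# LnkCap = what the device can do; LnkSta = the link it actually got.
lspci -vv 2>/dev/null | grep -E 'LnkCap:|LnkSta:' || true

# To narrow it to one slot, take its address from plain `lspci`, e.g.:
#   lspci -s 03:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
```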
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
mattventura said:
> In theory, it should work, but it's suboptimal. LSI/Broadcom tri-mode HBAs do not provide direct access to NVMe drives; they expose them as SAS drives. There are a few advantages (like seamless hotplug regardless of host support), but otherwise it will be slower because the NVMe software stack can't be used.
>
> If that adapter didn't work, I wonder if the 9500 and 9600 are like the 9400 in that they require cables with a proprietary pinout to connect NVMe drives? This adapter is specifically marketed as being for Broadcom cards.
I just came across this thread because I wanted to do the same thing as the OP, and this looks like a showstopper.
When you say slower, how much slower are we talking, and are there any other disadvantages? What does "do not provide direct access" actually mean in this context? Is there some sort of emulation layer?
 

mattventura

Well-Known Member
Nov 9, 2022
Octopuss said:
> I just came across this thread because I wanted to do the same thing as the OP, and this looks like a showstopper.
> When you say slower, how much slower are we talking, and are there any other disadvantages? What does "do not provide direct access" actually mean in this context? Is there some sort of emulation layer?
I don't know how it works internally within the card, but you will not see any /dev/nvmeX devices; you'll only see /dev/sdX devices, because they are presented to the system as SCSI drives. Even if the card doesn't slow things down or add any latency, it means you won't be able to use the NVMe software stack and instead have to use the SCSI software stack. NVMe's stack is geared towards high-performance flash devices, whereas SCSI's is not.

I see from the other thread that you're looking to use M.2 devices. If your board supports bifurcation, then just use a simple two-slot M.2 riser.
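For anyone wanting to see which way a given drive is being presented, a quick sketch with plain Linux tooling (nothing Broadcom-specific; the "translated-or-absent" label is just my naming):

```shell
# Drives translated by a tri-mode HBA show up as sdX with TRAN=sas,
# while natively attached NVMe shows up as nvmeXnY with TRAN=nvme.
lsblk -d -o NAME,TRAN,MODEL

# Native NVMe also creates /dev/nvme* nodes; behind a tri-mode HBA the
# very same drive produces none of these.
if ls /dev/nvme* >/dev/null 2>&1; then
    nvme_stack="native"
else
    nvme_stack="translated-or-absent"
fi
echo "$nvme_stack"
```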
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
Unfortunately I have to use an HBA due to virtualization.

What's a "software stack"? I feel stupid asking this, but I never really paid attention to the deeper technical background of things. Just a basic check on Wikipedia when NVMe started to get popular, to make sure it was the good stuff I wanted in my PC, heh.
 

mattventura

Well-Known Member
Nov 9, 2022
Octopuss said:
> Unfortunately I have to use an HBA due to virtualization.
>
> What's a "software stack"? I feel stupid asking this, but I never really paid attention to the deeper technical background of things. Just a basic check on Wikipedia when NVMe started to get popular, to make sure it was the good stuff I wanted in my PC, heh.
The concept of an "HBA" doesn't really exist in NVMe, at least not in typical usage. A card is either translating the drive to present it as a different type of disk (like the tri-mode HBAs do), or it's just a PCIe switch. Sometimes the switch-based HBAs have extra goodies, like being able to isolate certain errors from the host, or populating empty slots with dummy devices to aid hotplugging.

Why do you think you need an HBA for virtualization? Would it not work to pass through the PCIe device for each individual NVMe drive? If you go with a switch-based "HBA", then passing through the entire HBA might not even work the way you think it does.
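To make the passthrough option concrete, here's a hedged sketch of handing one NVMe drive to a VM whole via VFIO. The PCI address is a placeholder, this needs root, the vfio-pci module loaded, and the IOMMU enabled in firmware and kernel (e.g. intel_iommu=on on Intel):

```shell
# Placeholder address; find the real one with:
#   lspci -Dnn | grep -i 'Non-Volatile'
DEV="0000:03:00.0"

# Detach the drive from the host nvme driver, then steer this PCI
# function to vfio-pci and ask the kernel to re-probe it.
echo "$DEV"   > "/sys/bus/pci/devices/$DEV/driver/unbind"
echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
echo "$DEV"   > /sys/bus/pci/drivers_probe
```

After that the device can be handed to QEMU/libvirt as an ordinary PCI hostdev, and the guest sees a real /dev/nvmeX with no SCSI translation in the path.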
 

Octopuss

Active Member
Jun 30, 2019
Czech Republic
I had no idea I could pass through NVMe drives. So no, I don't need an HBA after all if I swap the SATA SSDs for NVMe ones. Might even save some power by not using a whole extra card.