School me on JBOD storage


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
In the midst of upgrading my home storage server I'm considering options to separate my compute components (CPU/MoBo/RAM, etc.) from my storage devices. The main motivation is physical space: I only have about 20" of depth available where these chassis need to go.

Forgetting for a moment how hard it is to find short-depth JBOD storage chassis (well, at least ones that can support 12+ drives), I need to be schooled a little bit on what HBA(s) I'd need. I'm currently planning to use an LSI 9400W-16i controller for my upgraded server so that I can connect both bulk media storage spinners (WD Golds) and some NVMe drives (Optanes), both now and in the future. Is there a way to make this HBA work to connect to an external JBOD chassis (with its own internal controller obviously), or would I need to use a different HBA altogether that is designed specifically for JBOD connectivity?

As you can tell I've never messed with external JBODs before.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Following on from your last post about it in your other thread, it appears that you can convert an internal connection to an external JBOD relatively easily if you find the right cable - for instance, for the 9400-16i HBA and a new JBOD chassis, you'd want an SFF-8643 -> SFF-8644 cable (older internal HBAs use SFF-8087 connectors and older JBODs use SFF-8088 connectors). These don't seem to be exactly commonplace, so don't expect them to be cheap, but they do exist.
"+getMessage("iPrintVerKit")+"

StarTech also make a nifty sorta SAS gender-bender that'll take two internal 8643 connectors and present them as external 8644 connectors; likely more expensive but neater than looping a cable out through an empty PCIe slot:
Mini-SAS Adapter - Dual SFF-8643 to SFF-8644 - 12 Gbps | SAS Cables | StarTech.com

That reminds me - IIRC you said your new build would likely be mATX with all four PCIe slots taken up, which complicates matters somewhat; if you're in an ATX case then no bother, otherwise you're looking at finding or making a hole to snake the cable out of.
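Once the cable or adapter is in place, here's a rough way to confirm from a Linux host that the HBA actually sees the external enclosure and its expander. This is only a sketch against the standard sysfs classes (assuming an mpt3sas-style SAS HBA and the ses module loaded); exact paths and names will vary with your kernel and driver.

```python
#!/usr/bin/env python3
"""Sketch: confirm a SAS HBA can see an external JBOD's expander/enclosure.

Assumes a Linux host with a SAS HBA driven by something like mpt3sas and the
ses (SCSI Enclosure Services) module loaded; adjust to taste.
"""
import glob
import os


def read(path):
    """Return a sysfs attribute's contents, or '' if it doesn't exist."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return ""


# One scsi_host per HBA; proc_name reveals the driver (e.g. mpt3sas)
for host in sorted(glob.glob("/sys/class/scsi_host/host*")):
    print(host, "->", read(os.path.join(host, "proc_name")))

# SAS expanders (the JBOD's built-in expander should show up here)
for exp in sorted(glob.glob("/sys/class/sas_expander/*")):
    print("expander:", os.path.basename(exp))

# SES enclosure devices (drive slots, LEDs, etc.)
for enc in sorted(glob.glob("/sys/class/enclosure/*")):
    print("enclosure:", os.path.basename(enc),
          "slots:", read(os.path.join(enc, "components")))
```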
 

Aestr

Well-Known Member
Oct 22, 2014
967
386
63
Seattle
The main requirement is one or more external SAS ports on your host, connecting to an external port on your JBOD, which is in turn connected to the disks.

To answer your question, there isn't a special type of HBA required for connecting to JBODs, although one with external ports makes the job easier. On your host the port(s) could be provided by an HBA that has external ports, or you could use one with internal ports and an adapter to convert some or all of them to external. See the link below for an example.

https://www.amazon.com/Mini-SAS-Adapter-SFF-8643-Sff-8644-Low-Profile/dp/B01L3H4N10

Assuming you are not already using all the internal ports on your HBA, the adapter is cost effective in terms of both money and PCIe slots.

On the JBOD side of things you'll have a number of external ports you can connect to. If your JBOD has a built-in expander, you will only need to connect one cable between it and your host to see all drives. If it does not have an expander, the JBOD will likely have one port for every 4 drives and you'll end up connecting multiple cables as you expand how many drives you're using. In your example of 12 drives you'd need 3 cables.
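To put quick numbers on the cabling, here's a minimal sketch of the arithmetic (assuming 4 SAS lanes per SFF-8644 cable, as in the examples above):

```python
import math

LANES_PER_CABLE = 4  # one SFF-8644 cable carries an x4 SAS connection

def cables_needed(drives: int, jbod_has_expander: bool) -> int:
    """One cable through an expander, otherwise ceil(drives / 4) direct-attach."""
    return 1 if jbod_has_expander else math.ceil(drives / LANES_PER_CABLE)

print(cables_needed(12, jbod_has_expander=False))  # 3 cables, direct-attach backplane
print(cables_needed(12, jbod_has_expander=True))   # 1 cable via the JBOD's expander
```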

The question of whether you want an expander usually boils down to the speeds you require and the cost difference you find. For media storage I don't think the speed limitations of an expander will be an issue, but with your requirements for a short depth chassis you might find the expander options more expensive.

I will say I have no experience with the hybrid SAS/NVMe HBA you're looking at, but I'd be very surprised if the SAS implementation didn't follow the usual standards.

TLDR: If you've dealt with connecting drives to an HBA internally everything is going to be pretty familiar. It's just the physical implementation of running cables between two chassis rather than inside one that is different.
 

Aestr

Well-Known Member
Oct 22, 2014
967
386
63
Seattle
As I mentioned I have not used these cards so this is just based on my review of the user guide.

In short, there is no way to connect 4 NVMe drives directly to one port of the 9405W-16i.

Standard NVMe adapters usually provide an OCuLink or SFF-8643 style connector per drive. To allow for mixed mode on the 9400 series, Broadcom connects NVMe drives with what it calls U.2 enabler cables. These cables connect to the SFF-8643 ports on the card on one end and to the backplane on the other. It looks like there are multiple SKUs depending on the backplane connection.

To allow for more than one NVMe drive per SFF-8643 connector, they have U.2 enabler cables with multiple backplane connectors. Some caveats I see in the user guide currently:

- Currently the available enabler cables only connect 2 NVMe drives per SFF-8643 connector
- When connecting 2 drives to one connector, they each run at PCIe x2 rather than x4
- With the 9405W-16i you can have a maximum of 4 NVMe drives at x4 or 8 at x2; neither configuration leaves room for any additional SAS/SATA drives
- Achieving the marketed 24 NVMe drives would require connecting through a PCIe switch.

This is just based on 5 minutes looking at the guide so I may have missed things and certainly have no real world experience. I encourage you to look at the guide yourself in detail and reach out to Broadcom for anything that isn't fully explained.
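For a rough feel of the lane math behind those caveats, here is a small sketch (not from the guide, just arithmetic on a 16-lane card with four x4 connectors and the dual-drive enabler cables described above):

```python
# Lane budget for a 16-lane tri-mode card with four SFF-8643 (x4) connectors.
# Mirrors the caveats above; check Broadcom's user guide for the real limits.
CONNECTORS = 4
LANES_PER_CONNECTOR = 4
TOTAL_LANES = CONNECTORS * LANES_PER_CONNECTOR  # 16

def nvme_drive_count(lanes_per_drive: int) -> int:
    """How many NVMe drives fit if every connector is given over to NVMe."""
    return TOTAL_LANES // lanes_per_drive

print(nvme_drive_count(4))  # 4 drives at x4 -- every connector used, nothing left for SAS/SATA
print(nvme_drive_count(2))  # 8 drives at x2 via the dual-drive U.2 enabler cables
```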

HBA 9405W Series x16 Host PCIe Tri-Mode Storage Adapters (174 KB)

Edit:

The actual user guide is below:

Broadcom MegaRAID and HBA Tri-Mode Storage Adapters User Guide (678 KB)
 
Last edited:

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
That was really helpful @Aestr so thank you very much for that. I think I'm going to just drop the NVMe drives from my server upgrade. I was only planning to use 2 Optanes and in all honesty I don't even need that level of performance. Some high-performing 12Gbps SAS3 SSDs should suffice just fine and will make my life a lot easier.
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
Thanks guys, I get it now.

My next question is, how would one connect a single SFF-8643 (say 1 of the 4 on an LSI 9405W-16i) to 4 NVMe drives? It seems that each drive needs its own SFF-8643 connector.

I'm looking at one of these for example without it taking up all 4 ports on the HBA.

https://www.servethehome.com/icy-do...-b-4-bay-2-5-nvme-u-2-ssd-mobile-rack-review/
Even if you could, would you really want to? Wouldn't you be muzzling the NVMe drives down to 48Gbps combined (assuming 12Gbps per lane on an x4 port)? That's ~4.5GB/s, which would be about the bandwidth of two of those drives?
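For what it's worth, here is roughly where that figure comes from, as a sketch: SAS3 runs 12Gbps per lane with 8b/10b encoding, so an x4 connection tops out a little under 5GB/s before protocol overhead.

```python
# Back-of-the-envelope: usable bandwidth of one x4 SAS3 link vs. NVMe drives.
SAS3_LINE_RATE_GBPS = 12      # raw line rate per lane
ENCODING_EFFICIENCY = 8 / 10  # SAS3 uses 8b/10b encoding
LANES = 4

usable_gbps = SAS3_LINE_RATE_GBPS * ENCODING_EFFICIENCY * LANES  # 38.4 Gb/s
usable_gb_per_s = usable_gbps / 8                                # ~4.8 GB/s

# A fast PCIe 3.0 x4 NVMe drive can read on the order of 2-3 GB/s (assumed,
# ballpark figure), so two such drives would already saturate the x4 SAS3 link.
print(f"x4 SAS3 usable bandwidth: ~{usable_gb_per_s:.1f} GB/s")
```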
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Even if you could, would you really want to? Wouldn't you be muzzling the NVMe drives down to 48Gbps combined (assuming 12Gbps per lane on an x4 port)? That's ~4.5GB/s, which would be about the bandwidth of two of those drives?
So how does one connect 24 NVMe drives to it? Expander?

On a related note, are there any drive bay cages (e.g. ICY DOCK) that can connect multiple 12Gbps drives with a single SFF-8643 cable at full speed?
 

itronin

Well-Known Member
Nov 24, 2018
1,233
793
113
Denver, Colorado
Edit: I'm talking about SAS SSDs and HDDs here, not NVMe... not sure if @IamSpartacus is still looking for NVMe or going back to SAS/SATA SSDs.

Each SFF-8643 carries 4 channels, which basically means 4 drives at 1:1 - so a single bay for 4 x 2.5" drives...

However, it looks like Icy Dock is no longer making a single-bay "mini-SAS" 4-drive HDD/SSD cage (up to 15mm).

They do have a 2-bay, 8-drive (up to 15mm) unit which takes two SFF-8643 "mini-SAS" connectors, so in essence 1 cable per drive bay used... dunno if that fits your use?

MB508SP-B - 2.5" HDD/SSD cage (ICY DOCK)

Each drive should have a full 12Gbps since it's a 1:1 ratio...

If you are only looking at SSDs and they are 7mm (or less), you may be able to use the single-bay 8-drive unit, but again you'll see two cables there to handle all 8 drives.

MB998IP-B - ToughArmor Series 2.5" HDD/SSD cage (ICY DOCK)

Also looks like they switched to SATA power connectors... wonder if the 3.3V is wired all the way through...
 
Last edited:

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
So how does one connect 24 NVMe drives to it? Expander?
I assume you meant 4 NVMe drives here...?

NVMe doesn't do expanders, as PCIe is a point-to-point protocol (it's probably best thought of as PCIe onna stick onna wire). If you want to bifurcate it further than the HBA offers, you'd basically be talking a PLX switch chip or similar (do such things exist?), and you'd still be bandwidth constrained between the PCIe switch and the HBA - the drives wouldn't get to perform to their utmost, which somewhat defeats the purpose of using NVMe in the first place (depending on usage patterns of course - you might not get anywhere near the limits).
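If drives do end up behind a bifurcated port or a PCIe switch, here's a rough way to see what link each NVMe controller actually negotiated. It's only a sketch using the standard Linux sysfs attributes (current_link_speed / current_link_width) and assumes the drives show up under /sys/class/nvme.

```python
#!/usr/bin/env python3
"""Sketch: print negotiated PCIe link speed/width for each NVMe controller.

A drive squeezed down to x2, or one sitting behind an oversubscribed switch,
shows up here immediately. Linux only; relies on standard PCIe sysfs files.
"""
import glob
import os


def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"


for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))  # underlying PCIe function
    speed = read(os.path.join(pci_dev, "current_link_speed"))
    width = read(os.path.join(pci_dev, "current_link_width"))
    print(f"{os.path.basename(ctrl)}: {speed}, x{width}")
```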

I'm guessing that PCIe AIC Optanes are out of the question as you're out of PCIe slots?

Some high-performing 12Gbps SAS3 SSDs should suffice just fine and will make my life a lot easier.
If you don't need the performance of NVMe and can still afford the big fat SAS SSDs then this is by far the simpler solution. With an all-flash or mostly-flash array, RAID performance will be sky-high anyway, so for non-enterprise use at least, stuff like a SLOG will likely be of limited utility (and an L2ARC likely pointless).
 


Aestr

Well-Known Member
Oct 22, 2014
967
386
63
Seattle
Even if you could, would you really want to? Wouldn't you be muzzling the NVMe drives down to 48Gbps combined (assuming 12Gbps per lane on an x4 port)? That's ~4.5GB/s, which would be about the bandwidth of two of those drives?
It isn't running NVMe over a 12/48Gbps SAS connection. These cards can switch each connector between SAS and PCIe modes. When NVMe drives or PCIe switches are plugged into one of the SFF-8643 ports, the port provides full PCIe x4 speed. That bandwidth can go directly to one NVMe drive, be split between 2 drives with a cable (PCIe x2 each), or feed a PCIe switch, which shares the PCIe x4 bandwidth between however many devices it supports.

@IamSpartacus yes, to achieve the listed 24 drives you would need PCIe switches involved. They'd act much like an expander does with SAS/SATA drives.
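To put rough numbers on that sharing, a minimal sketch, assuming PCIe 3.0 at roughly 985MB/s of usable bandwidth per lane after 128b/130b encoding:

```python
# Rough per-drive bandwidth for the three configurations described above.
PCIE3_MB_PER_LANE = 985  # approx. usable PCIe 3.0 bandwidth per lane

def per_drive_mb(port_lanes: int, drives_sharing: int) -> float:
    """Bandwidth per drive when `drives_sharing` devices share one port."""
    return port_lanes * PCIE3_MB_PER_LANE / drives_sharing

print(per_drive_mb(4, 1))  # one drive at x4: ~3940 MB/s
print(per_drive_mb(4, 2))  # two drives at x2 via the enabler cable: ~1970 MB/s each
print(per_drive_mb(4, 6))  # e.g. six drives behind a PCIe switch: ~657 MB/s each (worst case)
```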
 

kapone

Well-Known Member
May 23, 2015
1,095
642
113
It isn't running NVMe over a 12/48Gbps SAS connection. These cards can switch each connector between SAS and PCIe modes. When NVMe drives or PCIe switches are plugged into one of the SFF-8643 ports, the port provides full PCIe x4 speed. That bandwidth can go directly to one NVMe drive, be split between 2 drives with a cable (PCIe x2 each), or feed a PCIe switch, which shares the PCIe x4 bandwidth between however many devices it supports.

@IamSpartacus yes, to achieve the listed 24 drives you would need PCIe switches involved. They'd act much like an expander does with SAS/SATA drives.
Got it.

Sorry, I missed the part about the x16 host adapter. That being said, even with that adapter you'd get full bandwidth for up to 4 NVMe drives, but anything beyond that and you're potentially muzzling them. I say potentially with the assumption that you're striping them. If not, then it's just a switched topology and you'd get full bandwidth to each drive, assuming you don't access more than 4 at the same time.
 

gregsachs

Active Member
Aug 14, 2018
559
192
43
Not sure if this is useful, but this is how stuff shows up in MSM (MegaRAID Storage Manager) for me on Hyper-V 2016.
I've got 2x LSI boards, both with 23.34-0019 firmware/MR 5.14.
All drives needed to be set as JBOD in MSM. That did not show up as an option in earlier firmware.
The integrated 2208 runs the SSDs: two in a RAID 1 and two as part of my storage pool. I also have a junk SAS drive for bare-metal backup.
The 9285CV-8e runs a Xyratex box with 12x 2TB SAS drives in JBOD that are all in the storage pool. All drives show as normal drives in Computer Management. I've got another homemade JBOD that is currently powered off, but it shows up the same way, except the enclosure is an Intel RES2SV240 or whatever the expander is.
As stated, you should just need a cable that goes from an internal to an external connector, with matching connector types on each end.
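For anyone who'd rather do the same JBOD flip from the command line than from MSM, recent MegaRAID firmware exposes it through StorCLI as well. The sketch below is only an example: the controller number, enclosure and slot are placeholders, and whether `set jbod` is supported at all depends on your card and firmware.

```python
#!/usr/bin/env python3
"""Sketch: set MegaRAID drives to JBOD via StorCLI instead of the MSM GUI.

Assumes storcli64 is installed and the firmware supports JBOD mode (older
firmware does not, as noted above). Controller /c0 and enclosure/slot /e8/s0
are placeholders -- check `storcli64 show` for your own topology first.
"""
import subprocess

def run(cmd: str) -> None:
    print("$", cmd)
    subprocess.run(cmd, shell=True, check=True)

run("storcli64 /c0 show")            # controller 0: enclosures, drives, current settings
run("storcli64 /c0 set jbod=on")     # enable JBOD mode on the controller
run("storcli64 /c0/e8/s0 set jbod")  # example: expose enclosure 8, slot 0 as a JBOD disk
```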
 

Attachments

SniperCzar

New Member
Jan 16, 2019
2
0
3
If you want to do this, I'd recommend one of the newer SuperMicro passthrough backplanes. They seem similar to Dell's R630 10-bay setup, where the lower-numbered bays connect to a standard SAS3 HBA and the highest four bays have a separate connection via individual SFF-8643 Mini-SAS HD cables, each carrying PCIe x4. On the Dell servers this is handled by a PCIe card with a PEX8734 that provides the quadruple PCIe x4 cleanly without utilizing all your PCIe slots. On the SuperMicro systems I suspect they use four OCuLink ports off the motherboard directly without an expansion card, which is cheaper than switching the PCIe fabric if your board supports it. From what I can tell, OCuLink is essentially an enterprise version of a mining-riser type cable but without a generic USB connector.

I'm not sure if converting at both ends (host>SFF-8644>JBOD) will work or whether you'll run into issues with the REFCLK signals based on what I see on page 10 here https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150811_FA12_Allen.pdf

I suspect converting at both ends with the same adapters will put all the pins in the right place, much like coupling a pair of old-school RJ45 crossover cables together, but it may not follow a "standard" for external PCIe like OCuLink or PCIe 4.0 external cabling (just a hunch, I haven't actually read all the specs).

Specifically you'd need a chassis with something like a BPN-SAS3-826A-N4, 12 ports' worth of 8643-to-8644 adapters (6 SFF-8644 ports on both host and JBOD), a SAS3 HBA, and a PCIe HBA. Add in the internal and external cabling and it's going to cost you a bundle, and it may not even work depending on whether the pinout all lines up where it's supposed to be. If it did all work, you'd have 8 SAS3 ports and 4 NVMe ports, all hot-swap.
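If you do end up with a PEX-based card or backplane like that, here's a quick hedged sketch for checking that the switch actually enumerated on the host; it just shells out to lspci and looks for PLX/Broadcom's PCI vendor ID 10b5:

```python
#!/usr/bin/env python3
"""Sketch: look for PLX/PEX PCIe switch functions (vendor ID 10b5) via lspci.

Assumes pciutils is installed; the parsing is deliberately crude and only
meant to confirm that a switch-based NVMe card or backplane showed up at all.
"""
import subprocess

out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
plx = [line for line in out.splitlines() if "[10b5:" in line]

print("\n".join(plx) if plx else "No PLX/PEX (10b5) devices visible.")
```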

Edit: This seems to be the SuperMicro JBOF PCI-E external HBA - AOC-SLG3-4X4P | Add-on Cards | Accessories | Products - Super Micro Computer, Inc.
 
Last edited: