Supermicro 4U JBOD connector


gtech1

Member
May 27, 2019

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
In short, that card will work; it's also SAS3, but...

Be aware that the JBOD chassis in your link has 4x SFF-8088 (mini-SAS) connectors, while your linked card has SFF-8644 (mini-SAS HD). You may want to verify the connectors on the JBOD chassis and the expander model number (more pictures via the eBay link) before you order your HBA and cables.

It looks very possible that the expanders in this chassis will be SAS2, especially if you are getting it used (see the link above).

Depending on how it's cabled together inside, you may only need one cable if the two expanders inside are daisy-chained, though two would be better IMO. I've seen a picture of this product labeled as 2x SFF-8087 for the front (primary, secondary) and the other two for the rear (primary, secondary), implying they were not daisy-chained.

What OS are you going to run? The 9201-16e, which probably goes for about $50 USD or less on eBay, would be fine if you are going to run spinners.
It also uses SFF-8088, so you'd have the added advantage of the same connectors on both ends. No joy for ESXi 7, but lots of other OSes are happy with that card.

It also depends on how much you want to spend I suppose.
 

gtech1

Member
May 27, 2019
Thank you for the detailed response. I don't know how many connectors there are inside; I assume there's at least one for each backplane? And there are two backplanes, one in the front and one in the back?

If each backplane outputs one connector, then I need 2 cables to connect these two?

Or is it possible, as you said, to daisy-chain all the backplanes and output just one cable?

These will run spinning WD Ultrastar drives, and the main system will run FreeBSD with ZFS on them.
 

gtech1

Member
May 27, 2019
I got the JBOD today; there really are 4x 8088 ports in the back, and there are 4 cables going to the backplanes.
Being an idiot, I purchased the 9201-16e card because it also has 4x 8088 ports for my 4U main body, without realizing I can't fit it in there as it's full height...

So what kind of card do I need to connect to those 4x 8088 ports? Do I need to connect every port, or just one of them?
 

Iaroslav

Member
Aug 23, 2017
Kyiv
I got the same JBOD a week ago, probably with the same cabling: single-expander backplanes, two links per backplane, "IN" (PRI_J0) and "OUT" (PRI_J1). So, without touching the inside, I just connected the front backplane's "OUT" to the back one's "IN" with a short SFF-8088 to SFF-8088 cable to cascade them, and that's it. Sure, you could also cascade the backplanes internally with an SFF-8087 cable; obviously that's less weird.
Then you just run a link from the front backplane's "IN" to the HBA in your main chassis, SAS2 or SAS3 (in the latter case you need an SFF-8088 to SFF-8644 cable), and you have one "OUT" left to cascade further.
 

kapone

Well-Known Member
May 23, 2015
You want to run a 45-drive JBOD over a single SAS interconnect?? Let's do some math.

A single 8088 cable/connector = 4x 6 Gbps = 24 Gbps.
A modern spinning drive can do 200 MB/s sustained ≈ 2 Gbps. (I'm not even taking into consideration the cache on the disks.)
45x modern spinning drives = 45x 2 Gbps = 90 Gbps.

The JBOD chassis has 4x 8088 connectors, each giving 24 Gbps, for a total of 96 Gbps.
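That back-of-the-envelope math can be sketched in a few lines of Python (these are the round figures from the post above, not measured values):

```python
# SAS2 bandwidth math for a 45-bay JBOD, using the round numbers above.
SAS2_LANE_GBPS = 6       # one SAS2 lane
LANES_PER_8088 = 4       # an SFF-8088 cable carries an x4 wide port

link_gbps = SAS2_LANE_GBPS * LANES_PER_8088   # 24 Gbps per cable
drive_gbps = 2           # ~200 MB/s sustained ≈ 1.6 Gbps raw, rounded up
drives = 45

print(drives * drive_gbps)   # total drive throughput: 90 Gbps
print(4 * link_gbps)         # all four 8088 links combined: 96 Gbps
```

A single cable leaves the drives oversubscribed almost 4:1; all four links just barely keep up.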

umm...
 

i386

Well-Known Member
Mar 18, 2016
Germany
A modern spinning drive can do 200 MB/s sustained ≈ 2 Gbps. (I'm not even taking into consideration the cache on the disks.)
That's only true for empty HDDs, when data is written to the outer sectors. That's why all the datasheets say "up to" or "max" xxx MByte/s.
The more data is stored, the slower the drive becomes.
Example, Seagate Exos 16 TB: empty ~245 MB/s, ~80% full ~100 MB/s, ~95% full <70 MB/s

Edit: the numbers for the Exos 16 TB are not necessarily accurate; I didn't benchmark with different workloads
 

Iaroslav

Member
Aug 23, 2017
Kyiv
Normally you don't use that kind of setup for 100 Gbps output anyway.
A single-expander SAS2 backplane is 24 Gbps, and dual is probably for failover/multipath only.
So the best approach, I suppose, is to use fast enough HBA(s) to connect each backplane independently (2x 24 Gbps), and don't forget about PCIe bandwidth: Gen2 x8 is 4000 MB/s and Gen3 x8 is ~7880 MB/s.
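Those PCIe ceilings can be double-checked from the per-lane signaling rates and line-encoding overhead (nominal figures; real controllers land a bit lower):

```python
# Nominal PCIe link bandwidth in MB/s, accounting for line encoding.
def pcie_mb_per_s(gen: int, lanes: int) -> float:
    gt_per_s = {2: 5.0, 3: 8.0}[gen]           # raw signaling rate per lane, GT/s
    encoding = {2: 8 / 10, 3: 128 / 130}[gen]  # 8b/10b (Gen2) vs 128b/130b (Gen3)
    usable_gbps = gt_per_s * encoding          # usable Gbit/s per lane
    return usable_gbps * 1000 / 8 * lanes      # convert to MB/s, scale by link width

print(pcie_mb_per_s(2, 8))   # 4000.0 MB/s for Gen2 x8
print(pcie_mb_per_s(3, 8))   # ~7877 MB/s for Gen3 x8
```

Which matches the 4000 and ~7880 MB/s figures quoted above.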
 

kapone

Well-Known Member
May 23, 2015
That's only true for empty HDDs, when data is written to the outer sectors. That's why all the datasheets say "up to" or "max" xxx MByte/s.
The more data is stored, the slower the drive becomes.
Example, Seagate Exos 16 TB: empty ~245 MB/s, ~80% full ~100 MB/s, ~95% full <70 MB/s

Edit: the numbers for the Exos 16 TB are not necessarily accurate; I didn't benchmark with different workloads
Sure. I was trying to do "easy" math.

Now, throw in the on-disk cache... :)

My point is, you don't want to bottleneck your chassis/interconnects. These things are racked, cabled, and forgotten. You want to do it once, do it right, and never think about it again.

Normally you don't use that kind of setup for 100 Gbps output anyway.
A single-expander SAS2 backplane is 24 Gbps, and dual is probably for failover/multipath only.
So the best approach, I suppose, is to use fast enough HBA(s) to connect each backplane independently (2x 24 Gbps), and don't forget about PCIe bandwidth: Gen2 x8 is 4000 MB/s and Gen3 x8 is ~7880 MB/s.
Why not? The chassis/backplane(s)/interconnects all support it. If you filled that JBOD up with 45x SAS SSDs... :)

Single-expander SAS2 backplanes are not all limited to 24 Gbps. Almost all Supermicro backplanes (and others') support a "wide SAS" connection, which means your HBA --> backplane link is 2x SAS2 = 48 Gbps.
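As a quick sketch, the single-link vs. wide ("dual-link") arithmetic, assuming x4 ports on both ends:

```python
# Single vs. dual-link ("wide SAS") HBA-to-backplane bandwidth on SAS2.
SAS2_LANE_GBPS = 6
LANES_PER_PORT = 4                               # each multilane port is x4

single_link = SAS2_LANE_GBPS * LANES_PER_PORT    # one cabled port: 24 Gbps
wide_link = 2 * single_link                      # both ports cabled: 48 Gbps
print(single_link, wide_link)
```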

Dual-expander backplanes are... well, complex. I won't get into them.
 

Iaroslav

Member
Aug 23, 2017
Kyiv
Why not? The chassis/backplane(s)/interconnects all support it. If you filled that JBOD up with 45x SAS SSDs... :)

Single-expander SAS2 backplanes are not all limited to 24 Gbps. Almost all Supermicro backplanes (and others') support a "wide SAS" connection, which means your HBA --> backplane link is 2x SAS2 = 48 Gbps.
Yes, I agree, there are plenty of setups you could go with. For a JBOF I'd rather use a 4U 72-bay chassis: 2x the capacity, no adapters needed.
BTW, can you please give a link to this "wide SAS" explained? Can't find anything about it in the backplane manual.
 

i386

Well-Known Member
Mar 18, 2016
Germany
can you please give a link to this "wide SAS" explained
It's often described/explained as "dual link": two multilane SAS ports on the HBA/RAID controller connect to two multilane SAS ports on the backplane, increasing the total bandwidth between the expander and the HBA/RAID controller.
 

Iaroslav

Member
Aug 23, 2017
Kyiv
It's often described/explained as "dual link": two multilane SAS ports on the HBA/RAID controller connect to two multilane SAS ports on the backplane, increasing the total bandwidth between the expander and the HBA/RAID controller.
Now, what's the easiest way to run a bandwidth test? :)
 

i386

Well-Known Member
Mar 18, 2016
Germany
Now, what's the easiest way to run a bandwidth test?
Testing would require enough storage devices to saturate a single link.
I would just look at the controller's link status & enclosure info:
Adaptec.JPG
(This is from my Adaptec/Microchip RAID controller)
 

gtech1

Member
May 27, 2019
I got the same JBOD a week ago, probably with the same cabling: single-expander backplanes, two links per backplane, "IN" (PRI_J0) and "OUT" (PRI_J1). So, without touching the inside, I just connected the front backplane's "OUT" to the back one's "IN" with a short SFF-8088 to SFF-8088 cable to cascade them, and that's it. Sure, you could also cascade the backplanes internally with an SFF-8087 cable; obviously that's less weird.
Then you just run a link from the front backplane's "IN" to the HBA in your main chassis, SAS2 or SAS3 (in the latter case you need an SFF-8088 to SFF-8644 cable), and you have one "OUT" left to cascade further.
I'm confused. You used 8088 cables inside the JBOD to connect the front and back backplanes? But you also said you weren't touching the inside :)
 

gtech1

Member
May 27, 2019
It's often described/explained as "dual link": two multilane SAS ports on the HBA/RAID controller connect to two multilane SAS ports on the backplane, increasing the total bandwidth between the expander and the HBA/RAID controller.
So two of the 4 external ports are meant to cascade another JBOD chassis?

Wouldn't it be faster, if I ever want to add another JBOD chassis, to connect it directly to its own HBA in the main chassis?
 

Iaroslav

Member
Aug 23, 2017
Kyiv
I'm confused. You used 8088 cables inside the JBOD to connect the front and back backplanes? But you also said you weren't touching the inside :)
photo_2020-11-26_16-04-01.jpg
Like @kapone said, you put it in the rack and forget it. I don't have any chassis to use this 4-port connector with, plus now I have all the connectors accessible if I ever need to link anything directly.
 

i386

Well-Known Member
Mar 18, 2016
Germany
So two of the 4 external ports are meant to cascade another JBOD chassis?
It depends on how the backplanes are wired :D
There are two backplanes: one has 3 multilane SAS ports, the other has 2 multilane SAS ports.

In the 36-bay version, I would connect the second backplane with a single cable to the third multilane SAS port on the first backplane and then route the two other ports outside :D