Need info on what to get


Khar00f

New Member
Apr 28, 2024
13
3
3
Hey guys, I'm a little lost when it comes to the following and hoping someone can help me make sense of it all.

I currently have a 12-bay server and I just bought a 24-bay server (RAID cards not included) where I'll be moving all my HDDs (all mechanical).

The bays are already connected to the backplane and there are 4 mini SAS cables coming out.

My question relates to the RAID cards. I'll be running Ubuntu with ZFS, so I need cards that pass the drives through (IT mode).

I was looking at getting two 9300-16i cards (since each one supports 16 devices, and I have 24 drives).

I'm reading that these cards run really hot, as they draw a lot of power at 26W.

The server is Supermicro X11DPH-T.

[Attached image: 1000069947.png]

Anything you guys suggest? Are those cards OK?
 

nexox

Well-Known Member
May 3, 2023
712
298
63
If you just have 4 MiniSAS cables then you likely have expander backplanes (or two cables are missing), so you could run the entire thing off a single 8i HBA if you weren't too concerned about peak bandwidth. Look for heatsinks on the backplanes (I'm assuming there are two of the type used in a 2U chassis in there), or just look up the part numbers to confirm they are expanders.

As I understand it, if you need more than 8 ports then you want the newer and more expensive 9305 series of LSI cards; the 9300-16i is actually two 8-port controllers on one card, with a load of drawbacks.
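If the box is already running Linux, a quick way to check for an expander from the OS is the sysfs SAS transport class, which the mpt3sas driver populates; a minimal sketch (harmless to run on any machine, prints 0 if no SAS HBA/driver is present):

```shell
# Count the SAS expanders the kernel currently sees. The directory
# only exists when a SAS HBA driver with an expander attached is loaded,
# so we suppress the error and fall back to 0.
expanders=$(ls /sys/class/sas_expander 2>/dev/null | wc -l)
echo "SAS expanders visible to the kernel: ${expanders}"
```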
 

Khar00f

New Member
Apr 28, 2024
13
3
3
Where would I find the backplane info?

I found a label with this written on

EXP SAS 50030480180AF47F
 

nexox

Well-Known Member
May 3, 2023
712
298
63
The part number will be silk-screened on the PCB of the backplane, probably near the Supermicro logo; it should start with BPN-. The expander backplanes usually have their SAS ports all grouped together, whereas the direct-connect backplanes usually spread the SAS connectors down the length of the PCB to get them closer to the drives.

For example, one SAS3 expander backplane looks like: https://cdn11.bigcommerce.com/s-vsg...N-SAS3-826EL1-front__01999.1637718057.jpg?c=2
 

nexox

Well-Known Member
May 3, 2023
712
298
63
That does look like an expander, though it's hard to tell much from that angle.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,118
1,534
113
It's the SAS3 expander backplane. The non-expander ones don't have heatsinks.
 

Khar00f

New Member
Apr 28, 2024
13
3
3
The pics are too big to upload in the post, so I've added them to a Google Photos album.

Photos

Better view of the backplane
 

nexox

Well-Known Member
May 3, 2023
712
298
63
Alright, that's clearly the single-expander backplane. You can use a 9300-8i with two cables, and maybe add a second HBA for extra bandwidth, but then you're running multipath, and that's nontrivial. The 16i cards all use x8 PCIe slots, with a roughly 64Gbps limit at the PCIe 3.0 generation your motherboard runs, which is already below the combined bandwidth of 8x 12G SAS lanes.
 

Khar00f

New Member
Apr 28, 2024
13
3
3
Sorry for sounding stupid, but wouldn't using only 2 of the 4 cables mean that half my drives won't be connected?

Also, I'll be running SATA drives, which I think are limited to 6Gbps, if that makes a difference.
 

nexox

Well-Known Member
May 3, 2023
712
298
63
The SAS expander works kind of like a network switch: you could connect just one cable and access all the drives; the second one gets you more bandwidth.
 

Khar00f

New Member
Apr 28, 2024
13
3
3
Got it, so if I were to get two cards and connect all 4 cables (what you referred to as multipath), would that effectively double the bandwidth available to the drives?

Would that help avoid bottlenecks from the PCIe 3.0 x8 slot?
 

Chriggel

Member
Mar 30, 2024
84
40
18
It would not double the bandwidth. Multipathing and dual linking aren't the same thing.

I don't know the specific part, but my guess would be that each connector pair connects to one port of a dual-port SAS topology. If you're using SATA then that's already irrelevant, because SATA drives only have one port. That's the multipath aspect of the thing. It would be for redundancy, not for speed.

The fact that the connectors come in pairs is for dual linking the expander to the HBA. Like nexox said, imagine a network switch. If you need to stack it with another switch, you can use one cable and it will work: every port on both switches can reach every other port on both switches. But you can use two cables for double the bandwidth. In the same manner, expanders support dual linking to the HBA, using the second connection for increased bandwidth.

Since your 6G SATA disks will be closer to ~2G of real-world speed in the best-case scenario, you're only looking at a combined throughput of ~48Gbps across the entire backplane. That's the bandwidth of 2x4x6G on an 8-port HBA and less than the ~60Gbps limit of the PCIe 3.0 x8 slot.
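The arithmetic above as a few lines of shell, if it helps (the ~2Gbps per drive is a rough assumption, not a measured number):

```shell
# Back-of-envelope throughput budget for 24 mechanical SATA drives
# behind one dual-linked SAS3 expander backplane.
drives=24
per_drive_gbps=2      # rough real-world rate per HDD (assumption)
sas_lanes=8           # 2 cables x 4 lanes (dual link)
lane_gbps=12          # SAS3 line rate per lane

disk_total=$((drives * per_drive_gbps))   # what the disks can actually push
link_total=$((sas_lanes * lane_gbps))     # raw HBA <-> expander link rate
echo "disks ~${disk_total} Gbps, dual link ${link_total} Gbps"
```

The disks bottleneck well before the dual-linked expander connection does.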
 

nexox

Well-Known Member
May 3, 2023
712
298
63
Two cards means two x8 slots, so more PCIe bandwidth. I haven't used that multipath topology before, so I'm not quite sure what would be required to get more bandwidth, but it's theoretically possible to exceed the single-slot bandwidth limit. Note also that both SAS and PCIe are full duplex, so if you were doing something like copying a file from a striped array on half the disks to an array on the other half, you could get the full 64Gbps with a single HBA.

@Chriggel in this case the topology would be two HBAs linked to a single expander, since the backplane doesn't connect the second port even for SAS drives that support it. This primarily gets you tolerance to a (super rare) HBA failure, but I imagine some operating systems might let you configure access to some drives via one path and some via the other, increasing total bandwidth. I'm pretty sure you can do something similar with dual-port drives as well, but I haven't quite gotten around to testing it.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,118
1,534
113
Don't worry about all this multipathing or topology stuff. All you need is a single 9300-8i HBA. Run 2 cables from that to the backplane. You will have 96Gbps of bandwidth (12GB/s) shared between all the drives, which is more than 24 hard drives can do. Anything beyond that is going to be a complex setup that will give you nothing but headaches.
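Once the drives show up, the ZFS side is simple too. A sketch of one common layout for 24 drives; the pool name and the by-id device names are placeholders, so substitute your own (brace expansion needs bash):

```shell
# Hypothetical layout: three 8-wide raidz2 vdevs across 24 drives.
# "tank" and the ata-HDD* names are placeholders -- list your actual
# /dev/disk/by-id/ paths so the pool survives device renumbering.
zpool create -o ashift=12 tank \
  raidz2 /dev/disk/by-id/ata-HDD{01..08} \
  raidz2 /dev/disk/by-id/ata-HDD{09..16} \
  raidz2 /dev/disk/by-id/ata-HDD{17..24}
```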
 

nexox

Well-Known Member
May 3, 2023
712
298
63
In any case, if you're in the US, this is cheap and will get you started (after maybe a firmware upgrade), and if you run two, then even if one does fail it won't cause any downtime (until you need to reboot to replace it): Inspur LSI 9300-8i Raid Card 12Gbps HBA HDD Controller High Profile IT MODE | eBay

Somewhere on this forum is a list of model numbers and brand names of other HBAs with the same LSI 3008 chipset which will also work and maybe be built better than the Inspur.
 

nabsltd

Well-Known Member
Jan 26, 2022
442
299
63
Got it, so if I were to get two cards and connect all 4 cables (what you referred to as multipath), would that effectively double the bandwidth available to the drives?
That's a single-expander backplane, so the two ports nearest the right side of the board (looking from the back of the case) are "input" and the other two are "output" for cascading to another backplane. You can also connect a breakout cable to one of the output ports. I do this to connect the 2.5" drives in the rear carrier.
 

Khar00f

New Member
Apr 28, 2024
13
3
3
Good to know, thanks for the info. I'll keep that in mind; my 2.5" SSDs have two trays in the back of the case and connect to the motherboard for the time being.

I was actually able to source this card locally from someone who was willing to sell it (I'm not known for my patience when it comes to deliveries).

SUPERMICRO AOC-S3008L-L8E, which is supposed to be the same as an LSI 9300-8i.

I read these cards come in IT mode by default, so I won't need to flash it.
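A couple of harmless sanity checks I can run once it's in (sas3flash is Broadcom's firmware utility; the grep pattern is just a guess at how the controller will be listed):

```shell
# Check that the controller shows up on the PCIe bus; prints a
# fallback message on machines where it isn't found.
ctrl=$(lspci -nn 2>/dev/null | grep -i '3008' || echo "no SAS3008 controller found")
echo "$ctrl"

# If Broadcom's sas3flash utility is installed, it reports the
# firmware personality (IT vs IR) directly.
command -v sas3flash >/dev/null && sas3flash -list || echo "sas3flash not installed (skipping firmware check)"
```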

Gonna try it tonight and will update you guys on the status.

Thanks for the help.