maximum raid5 stripes using an SFF-8088 (4 lanes) controller


gs2023

New Member
Dec 22, 2023
Italy
blog.sguazz.it
Hi, I am new to PCIe and SATA details and I would like to know whether the number of RAID stripes should match the number of SAS/SATA lanes used by my controller's cable.

My machine is a Dell PowerEdge T40 running Debian GNU/Linux, with an LSI 9207-8e controller flashed in IT mode and plugged into a PCIe 3.0 slot connected to the PCH. The controller uses a SAS2308 chipset and is connected to an external enclosure with a single SFF-8088 cable.
The enclosure can hold up to 24 disks, and I am now wondering what kind of RAID I should set up with the Linux MD (multiple devices) RAID5 module.

For a single RAID5 MD array, should I use as many disks as possible, or should I limit it to 4 since the SFF-8088 cable has 4 lanes? From how many disks can this controller read at the same time?
Does this change if I use slower or faster disks? Does the SATA protocol multiplex the lanes to communicate with other disks while waiting for a reply?

Thank you,
Giuseppe
 

i386

Well-Known Member
Mar 18, 2016
Germany
How large are the disks? Is this for testing?
I would use RAID5 (or 0) only for testing or prototyping; everything else would feel unsafe to me with that many HDDs ._.

According to an older spec sheet the HBA supports "up to 24 SAS dual-port devices", so 24. It doesn't make a difference whether you use "slower" or "faster" disks or SSDs with the HBA.
The host talks SCSI to the HBA, and the HBA* talks SATA to the SATA devices.

*An HBA is a "dumb" device compared to a RAID controller, but it is still "smart enough" to recognize SAS or SATA devices and use the correct protocol for communication.
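
If you want to double check what each attached device ended up negotiating, a small sketch like this (assuming the lsscsi package is installed, e.g. apt install lsscsi on Debian) counts the devices by transport:

```python
#!/usr/bin/env python3
# Minimal sketch: count attached devices by transport (SAS vs SATA) as
# reported by lsscsi. Assumes the lsscsi utility is installed.
import subprocess
from collections import Counter

out = subprocess.run(["lsscsi", "--transport"],
                     capture_output=True, text=True, check=True)

counts = Counter()
for line in out.stdout.splitlines():
    # Lines typically look something like:
    #   [1:0:3:0]  disk    sas:0x5000c50012345678   /dev/sdd
    #   [0:0:0:0]  disk    sata:                    /dev/sda
    if "sas:" in line:
        counts["sas"] += 1
    elif "sata:" in line:
        counts["sata"] += 1

print(dict(counts))
```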
 

gs2023

New Member
Dec 22, 2023
Italy
blog.sguazz.it
How large are the disks? Is this for testing?
Yes, this is a testing environment, but after testing I am going to store a few daily backups on those disks. The disks I currently have are 1 TB 5400 rpm drives and I need as much space as possible. Of the 24 disks, I would create a single RAID5 with 20 disks and leave the 4 remaining as spares.

But if the 4-lane SFF-8088 port limits the communication, I might change the layout to five RAID5 MD arrays, each with 4 active disks, and keep a pool of 4 disks to be used as shared spares.

In either case, I am going to use the MD array(s) as physical volume(s) for LVM.
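
Just to put rough numbers on the two layouts (1 TB per disk, ignoring MD/LVM/filesystem overhead):

```python
# Rough usable-capacity comparison of the two layouts I'm considering.
# RAID5 loses one disk per array to parity; overhead is ignored.
DISK_TB = 1

# Layout A: one 20-disk RAID5 plus 4 hot spares
layout_a = (20 - 1) * DISK_TB          # 19 TB usable

# Layout B: five 4-disk RAID5 arrays plus 4 shared spares
layout_b = 5 * (4 - 1) * DISK_TB       # 15 TB usable

print(f"single 20-disk RAID5: {layout_a} TB usable")
print(f"five 4-disk RAID5s:   {layout_b} TB usable")
```

So the single big array gives me 19 TB usable against 15 TB for the split layout, which is why I would prefer it if the link is not the bottleneck.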
 

nexox

Well-Known Member
May 3, 2023
It sounds like your enclosure has a SAS expander in it; those usually share the four-lane uplink bandwidth across all drives when you access more than four concurrently, though it's always a good idea to test this with your particular hardware configuration. You should probably just choose your array layout based on the volume sizes and failure tolerance you want.
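
Back-of-the-envelope, assuming a SAS2 expander (roughly 6 Gb/s per lane) behind that SFF-8088 port and all drives streaming at once:

```python
# Rough estimate of how a 4-lane SAS2 uplink gets shared across 20 drives.
# Assumes ~6 Gb/s per lane and ~80% usable after 8b/10b encoding and protocol
# overhead; the real numbers depend on the expander and the workload.
LANES = 4
GBPS_PER_LANE = 6          # SAS2 link rate
EFFICIENCY = 0.8           # rough allowance for encoding/protocol overhead
DRIVES = 20

uplink_mb_s = LANES * GBPS_PER_LANE * 1000 / 8 * EFFICIENCY   # ~2400 MB/s
print(f"uplink ~{uplink_mb_s:.0f} MB/s, "
      f"~{uplink_mb_s / DRIVES:.0f} MB/s per drive with {DRIVES} drives active")
```

Something around 120 MB/s per drive is also roughly what a 1 TB 5400 rpm disk manages sequentially, so for big sequential reads the uplink and the drives end up fairly evenly matched.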
 

gs2023

New Member
Dec 22, 2023
Italy
blog.sguazz.it
Yes, this enclosure has an expander, with a second external port as well.
I am still waiting for the delivery, but I will try to test all possible RAID configurations.
 

nexox

Well-Known Member
May 3, 2023
Testing all possible RAID configurations is probably more effort than needed. The basic benchmark to see how multiple drives behave is to run a sequential workload on one drive and see how fast it goes, then add another drive and record the per-disk and per-enclosure read rate, then keep adding drives and writing down the performance. You should see the overall read speed increase as you add drives and then level off when you hit the limit of the 4x link, if everything is working the way it's supposed to.
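
If you want to automate that ramp-up, a rough sketch along these lines would do it; the device names are placeholders for wherever your enclosure drives show up, it needs root to read the raw devices with dd, and it only reads, so your data is untouched:

```python
#!/usr/bin/env python3
# Rough sketch of the ramp-up benchmark: sequentially read from 1, 2, ... N
# drives at once with dd and report per-drive and aggregate throughput.
# The device names below are placeholders - adjust them to your enclosure.
import subprocess
import time

DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # placeholders
READ_MB = 2048                                             # 2 GiB per drive

def read_drive(dev):
    # iflag=direct bypasses the page cache so we measure the drive/link, not RAM
    return subprocess.Popen(
        ["dd", f"if={dev}", "of=/dev/null", "bs=1M",
         f"count={READ_MB}", "iflag=direct"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

for n in range(1, len(DRIVES) + 1):
    start = time.time()
    procs = [read_drive(dev) for dev in DRIVES[:n]]
    for p in procs:
        p.wait()
    elapsed = time.time() - start
    total_mb = n * READ_MB
    print(f"{n} drive(s): {total_mb / elapsed:7.1f} MB/s total, "
          f"{total_mb / elapsed / n:6.1f} MB/s per drive")
```

Once the total stops climbing as you add drives, that total is your practical limit for the 4x uplink.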
 

MrGuvernment

Member
Nov 16, 2020
You do not want that many drives in RAID5, especially not for backups. Use RAID6 or RAID10 at a minimum. Or JBOD them and use ZFS on Linux..?
 

gs2023

New Member
Dec 22, 2023
Italy
blog.sguazz.it
Yes, I was thinking about having redundancy of more than one disk. My preferred solution would probably be a RAID6 managed directly by LVM; in fact, I have had experience with XFS not correctly taking the number of stripes into account when using a RAID5 managed by mdadm. I plan to check whether, with LVM instead of mdadm, XFS will be aware of the number of stripes.
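
To check that, I will probably run something like this rough sketch once the filesystem is mounted (xfsprogs installed; the mountpoint is just a placeholder) and compare the reported sunit/swidth against the chunk size and number of data disks of the underlying array:

```python
#!/usr/bin/env python3
# Quick check of the stripe geometry XFS thinks it is sitting on.
# Assumes xfsprogs is installed and the filesystem is mounted at the
# placeholder mountpoint below. sunit/swidth of 0 means XFS saw no stripe.
import subprocess

MOUNTPOINT = "/mnt/backup"   # placeholder, adjust to the real mountpoint

out = subprocess.run(["xfs_info", MOUNTPOINT],
                     capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    if "sunit" in line or "swidth" in line:
        print(line.strip())
```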