LSI 9207-8i, Expander and 24 SSDs (Beginner Confusion)


nyxynyx

New Member
Mar 21, 2020
I have an LSI 9207-8i card (2 ports) connected to an expander (Supermicro SAS-216EL, 3 ports) via a single mini-SAS cable, which is connected to a 24-drive backplane (Supermicro BPN-SAS-216EB, able to hold a 2nd expander). There should eventually be 24 SATA3 SSDs connected to this backplane.

Without changing the backplane, what can we do to increase the performance of this system?

Will it help to:

1. Add another mini-SAS cable to connect the 2nd port of the LSI HBA to another port on the expander? (2 cables connecting HBA to expander)
2. Add a second LSI 9207-8i and connect both HBAs to the same expander?
3. Add a second expander to the backplane and connect the first (and only) LSI HBA to both expanders?
4. Add a second expander and a second LSI 9207-8i HBA, where each HBA is connected to both expanders?

Thank you for your help!

EDIT: Corrected backplane model to BPN-SAS-216EB (Thanks Rand_!)
 

Rand__

Well-Known Member
Mar 6, 2014
Are you sure the name of the backplane is correct?
It sounds more like you have a BPN-SAS2-216EL (not an -A).

The question is what your limitation is - bandwidth (as in sequential reads with large blocks) or something else?
Adding more lanes (cables) will increase the bandwidth but not necessarily IOPS.

A lot depends on your disk layout, which you don't mention, and on the number of concurrent accesses you have (which increases the bandwidth actually used).

Further options -
Get a SAS3 backplane and HBA - this will double the available bandwidth
Get an -A (direct-attach) backplane and even more HBAs (or ports to be precise - 6 x4 ports for 24 disks)

Re your questions, there are people here with more experience with expanders who can answer those with more than just the guesses I'd offer :)
 

nyxynyx

New Member
Mar 21, 2020

Yes, you are correct, my current backplane is not an -A. It seems like I need to seriously consider getting a BPN-SAS-216A/BPN-SAS3-216A backplane to replace the current one, otherwise the SSD IOPS will be severely limited.

I'm planning to use the system for a PostgreSQL database that is both read- and write-heavy (unfortunately), so I think increasing IOPS will be more beneficial; I don't think it's currently bottlenecked by a lack of bandwidth. Database read operations need to be as fast as possible.

The current disk layout consists of 3 mirror vdevs (2 SSDs per mirror vdev) striped together using ZFS.
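For concreteness, the pool was built roughly like this (the device names below are placeholders, not the actual drive IDs):

```
# Roughly equivalent pool layout - 3 striped mirror vdevs of 2 SSDs each.
# sda..sdf are placeholder device names, not the real disks.
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf
```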
 

Rand__

Well-Known Member
Mar 6, 2014
IOPS are not necessarily limited by the current setup - it depends on the block size you use.

Run a fio test with the relevant block sizes and users (# of jobs / queue depth), and check the total speed. If you hit 24 Gb/s (4 lanes at 6 Gb/s each), i.e. roughly 2400 MB/s after 8b/10b encoding overhead [quite unlikely], then you are bandwidth limited...
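Something along these lines, purely as a sketch (the file path and all numbers are placeholders - adjust block size, job count and queue depth to the workload you actually see):

```
# Illustrative only: 4k random read test against a file inside the pool.
# /tank/fio.test is a placeholder path; tune --bs, --numjobs and --iodepth.
fio --name=randread --filename=/tank/fio.test --rw=randread \
    --bs=4k --size=10G --numjobs=4 --iodepth=32 \
    --ioengine=libaio --group_reporting
```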
 

nyxynyx

New Member
Mar 21, 2020
Ran the `fio` tests as suggested, comparing the 3 x 2-mirror vdev pool behind the expander with an NVMe SSD attached directly to the motherboard (not a fair comparison, but the easiest other test to run). ZFS LZ4 compression is enabled on the SATA SSD pool, while the NVMe drive is not using ZFS, so this is even further from being a half-decent comparison. All tests used the fio parameters `--bs=4k --iodepth=128 --size=10G`.

How would you interpret the results?

Does it also seem like we can add 3 more mirror vdevs before hitting the bandwidth limit? Or are the SSDs already underperforming due to having an expander between the backplane and the HBA?

Reads
NVMe
read: IOPS=323k, BW=1262MiB/s (1324MB/s)(4096MiB/3245msec)

ZFS SSD (3 x 2 mirror vdev)
read: IOPS=334k, BW=1305MiB/s (1368MB/s)(10.0GiB/7847msec)

Random Reads
NVMe
read: IOPS=314k, BW=1228MiB/s (1287MB/s)(4096MiB/3336msec)

ZFS SSD (3 x 2 mirror vdev)
read: IOPS=52.8k, BW=206MiB/s (216MB/s)(10.0GiB/49687msec)

Writes
NVMe
write: IOPS=150k, BW=584MiB/s (613MB/s)(10.0GiB/17525msec)

ZFS SSD (3 x 2 mirror vdev)
write: IOPS=135k, BW=529MiB/s (555MB/s)(10.0GiB/19357msec)

Random Writes
NVMe
write: IOPS=78.9k, BW=308MiB/s (323MB/s)(10.0GiB/33214msec)

ZFS SSD (3 x 2 mirror vdev)
write: IOPS=15.4k, BW=60.1MiB/s (62.0MB/s)(10.0GiB/170452msec)
 

Rand__

Well-Known Member
Mar 6, 2014
A couple of issues with this:
1. 10G is probably less than your memory, so a significant portion might be handled by cache.
2. Are you sure an iodepth of 128 is realistic for your use case?
See e.g. "What exactly is iodepth in fio?"
3. Depending on the number of users, it might be more realistic to set the number of jobs/threads to match your concurrency and the queue depth to something more realistic - it will depend on the application in the end (see the sketch after this list).
4. SATA is the worst option for deep queues, as it is usually limited to 32 commands at a time (vs SAS or NVMe, up to 4k iirc).
5. There is not only bandwidth but also latency to consider, especially for databases and interactive queries.
6. If the test is realistic, then you probably can add another 2 or 3 mirror pairs before hitting read limits. You then need to decide whether that is a valid scenario (do users read large amounts of sorted data? how does Postgres store data?) ...
7. It's not BPN-SAS-216EB, it's BPN-SAS2-216E1 (to be precise ;))
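For example, something like this would be closer (all values are illustrative placeholders - pick jobs and queue depth to match the real Postgres concurrency, use a data set larger than RAM so the cache can't hide the disks, and 8k roughly matches Postgres' default page size):

```
# Illustrative sketch only: many jobs, shallow per-job queue depth,
# mixed random read/write, working set larger than system RAM.
# /tank/fio.test is a placeholder path.
fio --name=pgsim --filename=/tank/fio.test --rw=randrw --rwmixread=70 \
    --bs=8k --size=256G --numjobs=16 --iodepth=4 \
    --ioengine=libaio --group_reporting
```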
 

nyxynyx

New Member
Mar 21, 2020
Thank you for your insight and corrections :)

If the system has 128 GB of memory, how large would you make the 10 GB test size in order to make the benchmark more representative?
 

Rand__

Well-Known Member
Mar 6, 2014
Well, the easiest is to remove memory or limit it in software - there are some ZFS ARC settings you can tune, but at the moment I can't remember which ones govern the write cache, sorry; the exact parameter name will also depend on the OS in use.

You can also just go large, of course.
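If this happens to be ZFS on Linux, the ARC side can be capped like this (note this mainly limits the read cache, not the write throttle, and the knobs differ on FreeBSD/illumos - treat it as a sketch):

```
# Cap the ARC at 8 GiB at runtime (OpenZFS on Linux; value in bytes)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# Or make it persistent via a module option, then reload the module or reboot
echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
```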
 

mysy

Member
Apr 2, 2020
If you have that many SSDs on one HBA, you should upgrade your HBA card to an LSI 9300 or 9400. The LSI 9207 is a SAS2308 with PCIe Gen3, but it was released a long time ago.
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
I don't think a SAS3 HBA is really going to help with his array of SATA SSDs... if he upgraded to SAS3 with SAS3 SSDs, then I'd agree with you...