LSI 9207-8i, Expander and 24 SSDs (Beginner Confusion)

Discussion in 'RAID Controllers and Host Bus Adapters' started by nyxynyx, Mar 24, 2020.

  1. nyxynyx

    nyxynyx New Member

    Joined:
    Mar 21, 2020
    Messages:
    6
    Likes Received:
    0
    I have an LSI 9207-8i card (2 ports) connected to an expander (Supermicro SAS-216EL, 3 ports) via a single mini-SAS cable, which is connected to a 24-drive backplane (Supermicro BPN-SAS-216EB, able to hold a 2nd expander). There should eventually be 24 SATA3 SSDs connected to this backplane.

    Without changing the backplane, what can we do to increase the performance of this system?

    Will it help to:

    1. Add another mini-SAS cable to connect the 2nd port of the LSI HBA to another port on the expander? (2 cables connecting HBA to expander)
    2. Add a second LSI 9207-8i and connect both HBAs to the same expander?
    3. Add a second expander to the backplane and connect the first (and only) LSI HBA to both expanders?
    4. Add a second expander and a second LSI 9207-8i HBA, where each HBA is connected to both expanders?

    Thank you for your help!

    EDIT: Corrected backplane model to BPN-SAS-216EB (Thanks Rand_!)
     
    #1
    Last edited: Mar 25, 2020 at 12:48 PM
  2. Rand__

    Rand__ Well-Known Member

    Joined:
    Mar 6, 2014
    Messages:
    4,020
    Likes Received:
    692
    Are you sure the name of the backplane is correct?
    It sounds more like you have a BPN-SAS2-216EL (not a -A).

    The question is what your limitation is - bandwidth (as in sequential reads with large blocks) or something else?
    Adding more lanes (cables) will increase the bandwidth but not necessarily the IOPS.

    A lot will depend on your disk layout, which you don't mention, and on the number of concurrent accesses you have (which increases the bandwidth used).

    Further options -
    Get a SAS3 backplane and HBA - this will double the available bandwidth
    Get a -A backplane and even more HBAs (or ports, to be precise - 6 ports for 24 disks)

    Re your questions: there are people here with more experience with expanders who can answer those with more than the guesses I'd be making :)
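    To put rough numbers on the bandwidth side (a back-of-the-envelope sketch, assuming ~600 MB/s of usable payload per SAS2 lane after 8b/10b encoding, with all 24 SSDs sharing the HBA-to-expander uplink):

    ```python
    # Back-of-envelope uplink bandwidth per SSD for the topologies discussed.
    # Assumes ~600 MB/s usable payload per SAS2 lane (6 Gb/s, 8b/10b encoding)
    # and that all 24 SSDs share the HBA<->expander uplink(s).

    LANE_MBPS = 600          # SAS2; a SAS3 lane would carry ~1200 MB/s
    LANES_PER_PORT = 4       # one mini-SAS cable = x4 wide port
    SSDS = 24

    def per_ssd_bandwidth(uplink_ports, lane_mbps=LANE_MBPS):
        """MB/s of shared uplink bandwidth available per SSD."""
        total = uplink_ports * LANES_PER_PORT * lane_mbps
        return total / SSDS

    print(per_ssd_bandwidth(1))                  # current, 1 cable: 100.0 MB/s per SSD
    print(per_ssd_bandwidth(2))                  # dual-link:        200.0 MB/s per SSD
    print(per_ssd_bandwidth(2, lane_mbps=1200))  # SAS3 dual-link:   400.0 MB/s per SSD
    ```

    The per-SSD numbers only bite for large sequential transfers; random small-block workloads rarely come near them.
    
    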
     
    #2
    nyxynyx likes this.
  3. nyxynyx

    nyxynyx New Member

    Yes, you are correct, my current backplane is not -A. It seems I need to seriously consider getting a BPN-SAS-216A/BPN-SAS3-216A backplane to replace the current one; otherwise the SSD IOPS will be severely limited.

    I'm planning to use the system for a PostgreSQL database that is both read and write heavy (unfortunately), so I think increasing the IOPS will be more beneficial, and that it's not currently bottlenecked by a lack of bandwidth. Database read operations need to be as fast as possible.

    The current disk layout consists of 3 mirror vdevs (2 SSD per mirror vdev) striped together using ZFS.
     
    #3
  4. Rand__

    Rand__ Well-Known Member

    IOPS are not necessarily limited by the current setup - it depends on the block size you use.

    Run a fio test with the relevant block sizes and users (# of jobs / queue depth), and check the total speed. If you hit 24 Gb/s (4 lanes at 6 Gb/s each), i.e. ~2400 MB/s [quite unlikely], then you are bandwidth limited...
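    For reference, a sketch of a fio job file along those lines (all values are illustrative assumptions, not recommendations; note that `direct=1` may be rejected or ignored on ZFS depending on the version):

    ```ini
    ; hypothetical fio job file: random reads split across several "users"
    ; (jobs) with a modest per-user queue depth, instead of one job at a
    ; huge iodepth; size should exceed RAM/ARC so reads aren't cache hits
    [global]
    ioengine=libaio
    ; O_DIRECT support on ZFS varies by version
    direct=1
    ; 8k matches PostgreSQL's page size
    bs=8k
    size=10G
    runtime=60
    time_based
    group_reporting

    [randread]
    rw=randread
    numjobs=8
    iodepth=4
    ```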
     
    #4
    nyxynyx likes this.
  5. nyxynyx

    nyxynyx New Member

    Ran the `fio` tests as suggested, comparing the 3x mirror vdevs behind the expander with an NVMe SSD attached directly to the motherboard (not a fair comparison, but the easiest other test to run). ZFS LZ4 compression is enabled on the SATA SSDs, while the NVMe drive is not using ZFS, so this is even further from being a half-decent comparison. All tests used the fio parameters `--bs=4k --iodepth=128 --size=10G`.

    How would you interpret the results?

    Does it also seem like we can add 3 more mirror vdevs before hitting the bandwidth limit? Or are the SSDs already not performing well due to having an expander between the backplane and the HBA?

    Reads
    NVMe
    read: IOPS=323k, BW=1262MiB/s (1324MB/s)(4096MiB/3245msec)

    ZFS SSD (3 x 2 mirror vdevs)
    read: IOPS=334k, BW=1305MiB/s (1368MB/s)(10.0GiB/7847msec)

    Random Reads
    NVMe
    read: IOPS=314k, BW=1228MiB/s (1287MB/s)(4096MiB/3336msec)

    ZFS SSD (3 x 2 mirror vdevs)
    read: IOPS=52.8k, BW=206MiB/s (216MB/s)(10.0GiB/49687msec)

    Writes
    NVMe
    write: IOPS=150k, BW=584MiB/s (613MB/s)(10.0GiB/17525msec)

    ZFS SSD (3 x 2 mirror vdevs)
    write: IOPS=135k, BW=529MiB/s (555MB/s)(10.0GiB/19357msec)

    Random Writes
    NVMe
    write: IOPS=78.9k, BW=308MiB/s (323MB/s)(10.0GiB/33214msec)

    ZFS SSD (3 x 2 mirror vdevs)
    write: IOPS=15.4k, BW=60.1MiB/s (62.0MB/s)(10.0GiB/170452msec)
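    As a sanity check against the single-uplink ceiling mentioned earlier (a rough sketch, assuming one SAS2 x4 cable at ~2400 MB/s of usable bandwidth):

    ```python
    # Compare the measured throughput above against the SAS2 x4 uplink ceiling.
    # Assumes ~600 MB/s of payload per SAS2 lane and a single x4 cable.
    UPLINK_MBPS = 4 * 600          # ~2400 MB/s total

    random_read_mbps = 216         # measured: 52.8k IOPS at 4k blocks
    seq_read_mbps = 1368           # measured sequential read

    print(random_read_mbps / UPLINK_MBPS)  # 0.09 -> nowhere near the link limit
    print(seq_read_mbps / UPLINK_MBPS)     # 0.57 -> sequential gets much closer
    ```

    If that reasoning holds, the random-read result points at per-device latency/IOPS rather than uplink bandwidth as the constraint.
    
    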
     
    #5
    Last edited: Mar 25, 2020 at 1:01 PM
  6. Rand__

    Rand__ Well-Known Member

    A couple of issues with this:
    1. 10G is probably less than your memory, so a significant portion might be served from cache.
    2. Are you sure iodepth=128 is a realistic measurement for your use case?
    See e.g. What exactly is iodepth in fio?
    3. Depending on the number of users, it might be more realistic to set the users (= jobs/threads) to whatever fits and the queue depth to something more realistic - it will depend on the application in the end.
    4. SATA is the worst option for deeper queues, as it's usually limited to 32 commands at a time (vs SAS or NVMe (up to 4k iirc)).
    5. There is not only bandwidth but also latency to consider, especially for databases and interactive queries.
    6. If the test is realistic, then you probably can add another 2 or 3 mirror pairs before hitting read limits. You then need to decide if that is a valid scenario (do users read large amounts of sorted data?). How does Postgres store data ...
    7. It's not BPN-SAS-216EB, it's BPN-SAS2-216E1 (to be precise ;))
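    Point 4 can be put in numbers (a sketch, using the queue depth and pool layout from this thread):

    ```python
    # SATA NCQ allows at most 32 outstanding commands per device, so a single
    # fio job at iodepth=128 over-queues any one SATA SSD on its own.
    IODEPTH = 128      # queue depth used in the tests above
    NCQ_LIMIT = 32     # SATA NCQ maximum outstanding commands per device
    DEVICES = 6        # 3 mirror pairs; reads can be served by either side

    print(IODEPTH > NCQ_LIMIT)      # True: one device cannot absorb the queue
    print(IODEPTH / DEVICES)        # ~21.3 per device if spread perfectly
    ```

    Whether the queue actually spreads that evenly depends on ZFS's scheduling, so this is only an upper-bound argument.
    
    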
     
    #6
    nyxynyx likes this.
  7. nyxynyx

    nyxynyx New Member

    Thank you for your insight and corrections :)

    If the system has 128 GB of memory, how large would you make the 10 GB test size in order to make the benchmark more representative?
     
    #7
  8. Rand__

    Rand__ Well-Known Member

    Well, the easiest is to remove memory or to limit it in software - there are some ZFS ARC settings for this, but at the moment I can't remember which ones govern the write cache, sorry; the exact parameter names will also depend on the OS in use.

    You can also just go large, of course.
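    For what it's worth, on Linux with OpenZFS the total ARC size can be capped via the `zfs_arc_max` module parameter (a sketch - this caps the whole ARC rather than the write cache specifically, the knob differs on FreeBSD/illumos, and the 4 GiB value is purely illustrative):

    ```
    # /etc/modprobe.d/zfs.conf
    # Cap the ARC at 4 GiB so a benchmark's working set can't be served
    # mostly from RAM (applies after reboot or zfs module reload)
    options zfs zfs_arc_max=4294967296
    ```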
     
    #8