performance with HGST HUSMM8080ASS204 not what i was expecting?

Discussion in 'Hard Drives and Solid State Drives' started by BLinux, Nov 5, 2017.

  1. BLinux

    BLinux cat lover server enthusiast

    Joined:
    Jul 7, 2016
    Messages:
    2,359
    Likes Received:
    824
    I recently got 10x HP 800GB HSSC0800S5xnNMLC SSDs, which as I understand are the same as the HGST HUSMM8080ASS204, but perhaps with HP firmware? I'm pairing them with 2 Adaptec 1000-8i HBAs in a Supermicro 216 chassis with a 216A backplane (direct attached). The Adaptec cards show these SSDs connecting at 12Gbps. They are in an X8DT6-F machine, so PCI-E 2.0 rather than 3.0; even so, with PCI-E 2.0 x8 the theoretical bandwidth is about 4GB/sec, right? That's the hardware setup; the OS is CentOS 7 Linux; just a test setup.
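    For reference, the 4GB/sec figure checks out: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, leaving about 4 Gbit/s (500 MB/s) of usable bandwidth per lane. A quick sanity check of the arithmetic:

```shell
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> 80% efficiency
# -> ~4 Gbit/s = ~500 MB/s usable per lane
per_lane_mbs=500
lanes=8
echo "$((per_lane_mbs * lanes)) MB/s theoretical ceiling for an x8 slot"
```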

    If I pick any single SSD and just create a file system on it, a simple 'dd' test for sequential write/read shows me about 800MB/sec write and 500MB/sec read. So, first thing, I'm immediately surprised the reads are slower than the writes? And this seems to be far below the HGST specs? At least the 800MB/sec confirms I am in the 12Gbps zone.
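    For what it's worth, a plain dd through the page cache can make writes look faster than they really are and can skew reads either way depending on what's cached. A less cache-skewed sketch of the same test (the mount point here is hypothetical; substitute your own):

```shell
# /mnt/test is a placeholder for wherever the SSD is mounted.
# oflag=direct / iflag=direct bypass the page cache, and conv=fsync
# makes dd wait until the data has actually reached the drive.
dd if=/dev/zero of=/mnt/test/seqfile bs=16M count=256 oflag=direct conv=fsync
dd if=/mnt/test/seqfile of=/dev/null bs=16M iflag=direct
```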

    So, next I used ZFS and put the 10x SSDs in raidz2 just to see what I get... 1.2GB/sec writes, 1.1GB/sec reads. Two things still bother me here: why are reads still slower than writes? And with 10x SSDs that can each do at least 800MB/sec writes, why is the raidz2 aggregate only 1.2GB/sec? I'm not expecting 8x 800MB/sec writes, since I know the PCI-E 2.0 x8 limits me to 4GB/sec, but I'm not even close. Earlier, before the test in this paragraph, I think I was bottlenecking at the CPUs with a pair of E5506 Xeons, as I was getting 650MB/sec writes and 550MB/sec reads on the same raidz2 set; then I swapped in a pair of X5670 CPUs and that got me the 1.2GB/sec writes, 1.1GB/sec reads. Obviously a huge difference between the pair of E5506 and X5670; is raidz2 parity intense enough to require more compute than the E5506 can handle for 10x 12Gbps SSDs?

    So, thinking "let me take the parity thing out of the equation here...", I put all 10x SSDs in a striped ZFS pool (10 single-disk vdevs). The same 'dd' test got me 1.8GB/sec writes and 1.8GB/sec reads. So, about 50% faster than raidz2 on the same set of 10x SSDs. That big a difference makes me wonder if raidz2 parity is really that compute or I/O intensive? And yet, even without parity, I'm not seeing even half of the 4GB/sec PCI-E 2.0 x8 limit.

    and before anyone asks, these SSDs report 4K sectors, and ZoL correctly used ashift=12.
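    For anyone following along, ashift is the log2 of the sector size ZFS assumes, so ashift=12 does line up with 4K sectors; on a live pool it can be confirmed with something like `zdb -C <pool> | grep ashift`:

```shell
# ashift is log2 of the sector size: ashift=12 -> 2^12 = 4096-byte sectors
echo $((1 << 12))
```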

    So, the question is: why am I not seeing bigger numbers? Any hints or suggestions?
     
    #1
  2. marcoi

    marcoi Well-Known Member

    Joined:
    Apr 6, 2013
    Messages:
    1,334
    Likes Received:
    205
    Raid 0 test is what I would have done as well. Can you try testing two disks only and disconnect the rest?
    Start with two disks only raid 0
    Then go to 3 and test
    then 4 and test, etc..
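    That incremental approach could be scripted; this sketch only prints the zpool create commands it would run rather than running them (pool name and disk names are placeholders, substitute your /dev/disk/by-id paths):

```shell
# Print (not run) the zpool create command for each incremental stripe width.
# "testpool" and the sdX names below are placeholders.
disks=(sda sdb sdc sdd sde)
for n in 2 3 4 5; do
  echo "zpool create -f testpool ${disks[@]:0:$n}"
done
```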

    I'm wondering if there is a wonky drive in your set that might be causing some of the issues. The other thing this might test is whether your card-to-backplane-to-drives path is causing some problems.

    This is more about troubleshooting to pinpoint a potential problem than explaining why you are getting those speeds.
     
    #2
  3. MBastian

    MBastian Member

    Joined:
    Jul 17, 2016
    Messages:
    49
    Likes Received:
    10
    I second that. Also check with lspci -vv whether both controllers are really at PCIe 2.0 x8. Lastly, it could be that the two controllers are not on the same CPU... but QPI link capacity should be more than enough(?)
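    To make that concrete, the line to look for in `lspci -vv` is LnkSta (the negotiated link), compared against LnkCap (what the slot/card can do). A sketch against a saved sample fragment; on the live box you'd pipe the real output, e.g. `sudo lspci -vv | grep -E 'LnkCap|LnkSta'`:

```shell
# Sample lspci -vv fragment (made-up but typical); substitute real output.
sample='LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s L1
LnkSta: Speed 5GT/s, Width x8'
# If LnkSta shows a lower speed or width than LnkCap, the card trained down.
echo "$sample" | grep -E 'LnkCap|LnkSta'
```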
     
    #3
  4. MiniKnight

    MiniKnight Well-Known Member

    Joined:
    Mar 30, 2012
    Messages:
    2,952
    Likes Received:
    863
    If you're using X8 gear, I believe that's chipset PCIE, not CPU PCIE. Back then, everything went through a northbridge (the 5520 IOH). E5 is when Intel moved PCIE onto the CPU. Here's a Fujitsu block diagram:
    [image: Fujitsu block diagram]

    If your drives are linking at 12G and you're seeing numbers that low on an HBA sequential test, then you might be hitting an x8 limitation.

    In that era, storage was still mostly spinning disk and a 10G NIC was a lot.
     
    #4
  5. KioskAdmin

    KioskAdmin Active Member

    Joined:
    Jan 20, 2015
    Messages:
    156
    Likes Received:
    32
    I could see it being the older system. I've seen these cards do well over what you're getting on each card.

    The writes being faster than the reads means there's some RAM caching somewhere, either in the OS or in the drive.
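    That's easy to demonstrate with dd itself: a buffered write returns as soon as the data is in the page cache, while conv=fsync forces dd to wait for the flush and report a more honest number. A small sketch against a temp file:

```shell
# Buffered write: dd can return before the data reaches stable storage,
# so the reported MB/s is partly cache speed. conv=fsync waits for the flush.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=64 2>&1 | tail -1            # cache-assisted number
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>&1 | tail -1  # flushed number
rm -f "$f"
```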
     
    #5
  6. BLinux

    BLinux cat lover server enthusiast

    Joined:
    Jul 7, 2016
    Messages:
    2,359
    Likes Received:
    824
    Yeah, good point... it could be one flaky drive causing an issue... although I did randomly test each of the drives individually, which still got me those strange numbers where writes > reads.

    As noted above, this older system's PCI-E lanes are attached to the Intel 5520 chipset.

    Even at PCI-E 2.0 x8, I should be able to see something closer to 4GB/s, no? I'm not even halfway there. If I were stuck at PCI-E 2.0 x4, then I could see ~2GB/s being my limit.

    Now, I just checked the X8DT6-F manual, and I'm sure I've got both Adaptec 1000-8i cards in the PCI-E 2.0 x8 slots: SLOT 7 and SLOT 5.

    I believe the cards can do more for sure; even in PCI-E 2.0 x8, they should do more than what I'm currently seeing... I don't think the 5520 platform is so old that it can't handle 2~3GB/s... it's not THAT old?

    Still not sure what it is... but like me, you guys think I should see bigger numbers given the number and model of SSDs and the controllers, right?
     
    #6
  7. i386

    i386 Well-Known Member

    Joined:
    Mar 18, 2016
    Messages:
    1,683
    Likes Received:
    412
    Is dd multithreaded?

    I have a few SSDs and can't get close to their specs without multithreaded applications or benchmarks.
     
    #7
  8. BLinux

    BLinux cat lover server enthusiast

    Joined:
    Jul 7, 2016
    Messages:
    2,359
    Likes Received:
    824
    I'm pretty sure it is not, and that's a good point as well... not sure if a single core running 'dd' can generate enough throughput. I did pick a rather large block size (16M) with dd to go a bit faster. I was just doing preliminary testing of this equipment I just got, but I guess maybe I need to use iozone or bonnie++...
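    Before setting up bonnie++ or iozone, a crude way to test the single-stream theory is to run several dd writers in parallel and sum their reported throughput; if four streams together go much faster than one, the bottleneck is the single dd process, not the drives. The target directory below is a placeholder:

```shell
# Run 4 parallel sequential writers; /mnt/test is a placeholder mount point.
# The aggregate MB/s is the sum of the four dd summary lines.
for i in 1 2 3 4; do
  dd if=/dev/zero of=/mnt/test/f$i bs=16M count=64 oflag=direct conv=fsync &
done
wait
```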
     
    #8
  9. funkywizard

    funkywizard mmm.... bandwidth.

    Joined:
    Jan 15, 2017
    Messages:
    607
    Likes Received:
    253
    dd uses a lot of CPU at 1GB/s and higher. Even if the drives can handle more speed, you'll be limited by single-core CPU performance.
     
    #9
  10. T_Minus

    T_Minus Moderator

    Joined:
    Feb 15, 2015
    Messages:
    6,838
    Likes Received:
    1,493
    Any way you can test on newer hardware?
     
    #10
  11. BLinux

    BLinux cat lover server enthusiast

    Joined:
    Jul 7, 2016
    Messages:
    2,359
    Likes Received:
    824
    Yeah, the next time I have time to play with this, I'm going to try bonnie++ or iozone to test with multiple threads. I guess I should also take a closer look to see if a single core is pegged during the test.
    No, not right now, I don't have anything for that. I was thinking of getting the X9DRD-7LN4F from this deal (SuperMicro X9DRD-7LN4F) for this purpose, but that seller canceled my bid and I'm sort of glad I'm not dealing with them. I would like to see if this Westmere platform can push 2~4GB/sec at least. At some point, when I get newer hardware, I'll swap out the motherboard/CPU.
     
    #11