Performance with HGST HUSMM8080ASS204 not what I was expecting?


BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
I recently got 10x HP 800GB HSSC0800S5xnNMLC SSDs; as I understand it, these are the same as the HGST HUSMM8080ASS204, but perhaps with HP firmware? I'm pairing them with two Adaptec 1000-8i HBAs in a Supermicro 216 chassis with the 216A backplane (direct attached). The Adaptec cards show these SSDs connecting at 12Gbps. They are in an X8DT6-F machine, so PCI-E 2.0 rather than 3.0; even so, with PCI-E 2.0 x8, the theoretical bandwidth is about 4GB/sec, right? That's the hardware setup; the OS is CentOS 7 Linux; it's just a test setup.
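(For the math: PCI-E 2.0 runs 5 GT/s per lane with 8b/10b encoding, so roughly 500 MB/sec per lane; x8 gives about 4 GB/sec raw, and real-world throughput after protocol overhead is usually somewhat lower, more like ~3.2 GB/sec.)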

If I pick any single SSD and just create a file system on it, a simple 'dd' test for sequential write/read shows me about 800MB/sec write and 500MB/sec read. So, first thing, I'm immediately surprised that the reads are slower than the writes? And this seems to be far below the HGST specs? At least the 800MB/sec confirms I'm in the 12Gbps zone.
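Something along these lines (paths and counts are examples; whether conv=fdatasync is used matters, since without it dd's write figure includes page-cache effects):

# sequential write with large blocks; fdatasync forces a flush before dd reports a rate
dd if=/dev/zero of=/mnt/ssd/test.bin bs=16M count=256 conv=fdatasync

# drop the page cache, then sequential read
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/ssd/test.bin of=/dev/null bs=16M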

So, next I use ZFS and put the 10x SSDs in raidz2 just to see what I get... 1.2GB/sec writes, 1.1GB/sec reads. Two things still bother me here: why are reads still slower than writes? And with 10x SSDs that can each do at least 800MB/sec writes, why is the raidz2 aggregate only 1.2GB/sec? I'm not expecting 8x 800MB/sec writes, since I know PCI-E 2.0 x8 limits me to 4GB/sec, but I'm not even close. Earlier, before the test in this paragraph, I think I was bottlenecking at the CPUs with a pair of E5506 Xeons, as I was getting 650MB/sec writes and 550MB/sec reads on the same raidz2 set; then I swapped in a pair of X5670 CPUs, which got me to the 1.2GB/sec writes, 1.1GB/sec reads. Obviously a huge difference between the pair of E5506 and X5670; is raidz2 parity intense enough to require more compute than the E5506 can handle for 10x 12Gbps SSDs?
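The pool was created roughly like this (pool name, device names, and the -f are examples; ashift set explicitly just to be sure):

# 10-disk raidz2 vdev in a single pool
zpool create -f -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk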

So, thinking "let me take the parity thing out of the equation here...", I put all 10x SSDs into a striped ZFS pool (10 single-disk vdevs). The same 'dd' test got me 1.8GB/sec writes and 1.8GB/sec reads. So, about 50% faster than raidz2 on the same set of 10x SSDs. That big of a difference makes me wonder if raidz2 parity is really that compute or I/O intense? And yet, even without parity, I'm not seeing even half of the 4GB/sec PCI-E 2.0 x8 limit?

And before anyone asks: these SSDs report 4K sectors, and ZoL correctly used ashift=12.
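For anyone who wants to verify this on their own pool, zdb shows it in the pool configuration ('tank' is a placeholder pool name):

# look for the ashift value in the pool config
zdb -C tank | grep ashift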

So, the question is: why am I not seeing bigger numbers? Any hints or suggestions?
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
A RAID 0 test is what I would have done as well. Can you try testing two disks only and disconnecting the rest?
Start with two disks only in RAID 0 and test,
then go to 3 and test,
then 4 and test, etc.
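Something like this loop would do it (device and pool names are examples; zpool destroy wipes the test pool between runs):

# build a striped pool from the first N disks and run the same dd test, for N = 2..10
DISKS=(sdb sdc sdd sde sdf sdg sdh sdi sdj sdk)   # example device names
for n in $(seq 2 10); do
    zpool create -f -o ashift=12 testpool "${DISKS[@]:0:$n}"
    dd if=/dev/zero of=/testpool/test.bin bs=16M count=256 conv=fdatasync
    zpool destroy testpool
done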

I'm wondering if there is a wonky drive in your set that might be causing some of the issues. The other thing this would test is whether the card-to-backplane-to-drive path is causing some problems.

This is more about troubleshooting to pinpoint a potential problem than explaining why you are getting those speeds.
 

MBastian

Active Member
Jul 17, 2016
Düsseldorf, Germany
A RAID 0 test is what I would have done as well. Can you try testing two disks only and disconnecting the rest?
Start with two disks only in RAID 0 and test,
then go to 3 and test,
then 4 and test, etc.
I second that. Also check with lspci -vv whether both controllers are really at PCIe 2.0 x8. Lastly, it could be that the two controllers are not on the same CPU... but QPI link capacity should be more than enough(?)
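Something like this shows the advertised vs. negotiated link (the 03:00.0 address is just an example; take the real one from the first command):

# find the Adaptec HBAs and note their PCI addresses
lspci | grep -i adaptec

# check link capability vs. negotiated status
lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'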
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
If you're using X8 gear, I believe that's chipset PCIe, not even CPU PCIe. Back then, everything went through a northbridge. E5 is when Intel moved PCIe onto the CPU. Here's a Fujitsu block diagram:


If your drives are linking at 12G and you're seeing numbers that low on an HBA sequential test, then you might be hitting an x8 limitation.

In that era, storage was still mostly spinning disk and a 10G NIC was a lot.
 

KioskAdmin

Active Member
Jan 20, 2015
I could see it being the older system. I've seen these cards do well over what you're getting on each card.

The write speed being faster means that there's some RAM caching somewhere, either in the OS or in the drive.
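One way to take the OS page cache out of the picture is direct I/O with dd (a sketch; /dev/sdX is a placeholder, and the write test destroys whatever is on that device):

# sequential write bypassing the page cache (DESTRUCTIVE on /dev/sdX)
dd if=/dev/zero of=/dev/sdX bs=16M count=256 oflag=direct

# sequential read bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=16M count=256 iflag=direct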
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
A RAID 0 test is what I would have done as well. Can you try testing two disks only and disconnecting the rest?
Start with two disks only in RAID 0 and test,
then go to 3 and test,
then 4 and test, etc.

I'm wondering if there is a wonky drive in your set that might be causing some of the issues. The other thing this would test is whether the card-to-backplane-to-drive path is causing some problems.

This is more about troubleshooting to pinpoint a potential problem than explaining why you are getting those speeds.
Yeah, good point... it could be one flaky drive causing an issue... although I did randomly test each of the drives individually, which still got me those strange numbers where writes > reads.

I second that. Also check with lspci -vv whether both controllers are really at PCIe 2.0 x8. Lastly, it could be that the two controllers are not on the same CPU... but QPI link capacity should be more than enough(?)
As stated below, this older system's PCI-E lanes are attached to the Intel 5520 (IOH), not the CPUs.

If you're using X8 gear, I believe that's chipset PCIe, not even CPU PCIe. Back then, everything went through a northbridge. E5 is when Intel moved PCIe onto the CPU. Here's a Fujitsu block diagram:


If your drives are linking at 12G and you're seeing numbers that low on an HBA sequential test, then you might be hitting an x8 limitation.

In that era, storage was still mostly spinning disk and a 10G NIC was a lot.
Even at PCI-E 2.0 x8, I should be able to see something closer to 4GB/s, no? I'm not even halfway there. If I were stuck at PCI-E 2.0 x4, then I could see ~2GB/s being my limit.

Now, I just checked the X8DT6-F manual, and I'm sure I've got both Adaptec 1000-8i cards in the PCI-E 2.0 x8 slots: SLOT 7 and SLOT 5.

I could see it being the older system. I've seen these cards do well over what you're getting on each card.

The write speed being faster means that there's some RAM caching somewhere, either in the OS or in the drive.
I believe the cards can do more for sure; even in PCI-E 2.0 x8, they should do more than what I'm currently seeing... I don't think the 5520 platform is so old that it can't handle 2~3GB/s or more... it's not THAT old?

Still not sure what it is... but like me, you guys think I should be seeing larger numbers given the number and model of SSDs and the controllers, right?
 

i386

Well-Known Member
Mar 18, 2016
Germany
Is dd multithreaded?

I have a few SSDs and can't get close to the specs without multithreaded applications or benchmarks.
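For a multi-threaded test, fio can run several jobs in parallel; a sketch (/dev/sdX is a placeholder for one of the SSDs, and a pure read test is non-destructive):

# 4 parallel sequential-read jobs against one SSD, bypassing the page cache
fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --runtime=60 --time_based --group_reporting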
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
Is dd multithreaded?

I have a few SSDs and can't get close to the specs without multithreaded applications or benchmarks.
I'm pretty sure it is not, and that's a good point as well... I'm not sure a single core running 'dd' can generate enough throughput - I did pick rather large block sizes (16M) with dd to go a bit faster. I was just doing preliminary testing of this equipment I just got; but I guess maybe I need to use iozone or bonnie++...
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
dd uses a lot of CPU at 1GB/s and higher. Even if the drives can handle more speed, you'll be limited by single-core CPU performance.
Yeah, the next time I have time to play with this, I'm going to try bonnie++ or iozone to test with multiple threads. I guess I should also take a closer look to see if a single core is pegged out during the test.
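Probably something like this while the test is running (both tools come with the sysstat package):

# per-core CPU utilization, refreshed every second
mpstat -P ALL 1

# or watch the dd process itself, per thread
pidstat -t -C dd 1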
Any way you can test on newer hardware?
No, not right now, I don't have anything for that. I was thinking of getting the X9DRD-7LN4F from this deal (SuperMicro X9DRD-7LN4F) for this purpose, but that seller canceled my bid, and I'm sort of glad I'm not dealing with them. When I have time to look at this again, I'm going to take a closer look at the core utilization and also try out bonnie++ or iozone with multiple threads. I would like to see if this Westmere platform can push at least 2~4GB/sec. At some point, when I get newer hardware, I'll swap out the motherboard/CPU.