Intel 750 raidZ in ZFS


vrod

Active Member
Jan 18, 2015
OK, so I made some changes to the hardware. A new 750 is installed, bringing the total to 5, and the QLE8152 adapter has been swapped out for a dual-port Mellanox ConnectX-2. RAM has been bumped up to 256GB, and another E5-2660 v2 is in the box so I can use all 6 PCIe slots.

The network issue has been solved by the adapter change. I'm now getting a full 9.40Gbps with iperf, both ways, on both adapters. :) I have configured the 750 SSD array as a raidz, not RAID 0 as I originally planned. This might change later, but for now I'm just in a test phase.
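For reference, here's roughly what the setup looks like from the command line. This is just a sketch; the pool name "tank", the target IP, and the Linux device names are examples, not my actual ones:

    # sanity-check link throughput in both directions (iperf3 syntax; plain iperf is similar)
    iperf3 -c 10.0.0.2 -P 4        # forward
    iperf3 -c 10.0.0.2 -P 4 -R     # reverse

    # one raidz vdev across the four 750s
    zpool create tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
    zpool status tank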

I did a single CrystalDiskMark test (will do more later with Anvil and ATTO) and got some "OK" results. I am testing this in an ESXi 6.5 VM over iSCSI. I have changed the path selection policy of the LUN to round-robin to load-balance across both adapters. However, because I am running the newest patch of ESXi, my VM suffers from horrible disk performance from time to time. See more here if you also experience it: ESXi 6.5 Slow vms, High "average response ... |VMware Communities
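For anyone who wants to set round-robin the same way from the CLI instead of the vSphere client, something like this should do it (the naa. identifier is a placeholder for your own LUN):

    # switch the LUN's path selection policy to round-robin
    esxcli storage nmp device set --device naa.XXXX --psp VMW_PSP_RR
    # optionally rotate paths every I/O instead of the default 1000 IOPS
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXX --type iops --iops 1
    # verify the active policy
    esxcli storage nmp device list --device naa.XXXX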

Back on topic, here's the CrystalDiskMark performance (screenshot attached)...


I saw sequential read and write as high as 1.6GB/s, but it looks like the ESXi bug hits the write performance especially hard... So far so good; I'm quite impressed with the raidz performance. :) Currently just 4 of the SSDs are in the pool, but I might redo it to include the 5th.
 


Rand__

Well-Known Member
Mar 6, 2014
Can you fix the VMware link? I'd be interested in that.
Values look slower than a single 750 locally attached ;)

Hm, Google gives the same 'Problem loading' result, so it's not just your link
 

vrod

Active Member
Jan 18, 2015
I guess the performance is lower because the data has to travel further and through more layers than with directly attached SSDs. :)

However, they are rated for 900MB/s write and I'm getting almost 300MB/s more... even though I shouldn't, since it's raidz. And if not having the VMware bug means 1.6GB/s, then I'll be more than satisfied.
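Thinking about it, the numbers aren't actually that strange: with 4 disks in a raidz (3 data + 1 parity per stripe), full-stripe sequential writes land on 3 data disks at a time, so the rough ceiling is 3 × 900MB/s ≈ 2.7GB/s before parity, ZFS, and network overhead. Beating a single drive's 900MB/s rating is more or less what you'd expect.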

I need to do some more testing and tuning, but so far, so good.
 

vrod

Active Member
Jan 18, 2015
I have the 400GB models; they are rated for 2200MB/s read and 900MB/s write, yes. :)

But as said, I still have to do more testing.
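If anyone wants to double-check the ratings against what the drives themselves report, something like this should work on a Linux host with nvme-cli and smartmontools installed (device name is an example):

    # model, capacity and firmware for all NVMe drives
    nvme list
    # health/wear details for a single drive
    smartctl -a /dev/nvme0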
 

vrod

Active Member
Jan 18, 2015
OK, so I installed one of my other C6220 nodes with a brand-new ESXi 6.5 Dell image... it has the version from before the disk bug, so my VM is fine now. Did an ATTO test and write maxes out the 20Gb/s line :D (round-robin MPIO policy enabled). Screenshot attached as test2.PNG.

Write is about 2.3GB/s and read about 1.8GB/s. Not too bad in my opinion; I hope the addition of another SSD will bring the read speed closer to the 2.3GB/s mark. I did not expect the raidz to perform like this, especially not on writes. I also didn't think ZFS would be this good with NVMe SSDs, but this proved otherwise. Everything is blazing fast now, especially after the NIC change. So far so good. :)
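For the curious, this is roughly how I keep an eye on things while a benchmark runs; "tank" is an example pool name:

    # per-device throughput on the ZFS box, refreshed every second
    zpool iostat -v tank 1

    # on the ESXi host: confirm both paths are active under round-robin
    esxcli storage nmp path list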