ZFS Mirrored NVMe + Debian / LXC - Slower than expected


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I was running some sysbench OLTP tests on a dual E5-2699 V3 server and saw some extremely strange results. These drives were copying images via dd about 3x as fast as the S3700 + SanDisk LB406s (400GB SLC) ZFS mirrors in the Las Vegas colo.

Setup:
  • Proxmox VE 4.0 (Debian Jessie)
  • SSDs set as ZoL mirrored SSD pairs
  • Stock drivers
  • Dual Intel Xeon E5-2699 V3 server
  • LXC containers with Ubuntu 14.04 using the ZFS mirrored pairs as storage, 8 cores dedicated
  • Sysbench OLTP tests against MySQL 5.6
  • Containers were run using the exact same test setups
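For anyone wanting to replicate the benchmark, a sysbench 0.4-era OLTP run against MySQL looks roughly like this (table size, credentials, and thread count here are illustrative, not necessarily the exact values used in these tests):

```shell
# Prepare a test table (the sbtest database must already exist)
sysbench --test=oltp --db-driver=mysql \
  --mysql-db=sbtest --mysql-user=root --mysql-password=secret \
  --oltp-table-size=1000000 prepare

# Run the OLTP test with 8 threads for 60 seconds
sysbench --test=oltp --db-driver=mysql \
  --mysql-db=sbtest --mysql-user=root --mysql-password=secret \
  --oltp-table-size=1000000 --num-threads=8 --max-time=60 \
  --max-requests=0 run
```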
The drives used were:
  • 2x Intel DC S3710 200GB
  • 2x Intel DC P3600 400GB
  • 2x Samsung XS1715 800GB
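For reference, a ZoL mirrored pair is created along these lines (pool name, device names, and the ashift value are example choices, not necessarily what this box uses — ashift=12 is just the common recommendation for 4K-sector SSDs):

```shell
# Create a mirrored pool from two NVMe drives (example device names)
zpool create -o ashift=12 nvmepool mirror /dev/nvme0n1 /dev/nvme1n1

# Verify the mirror topology
zpool status nvmepool
```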
Here is what I saw:
[Attached image: upload_2015-11-21_21-14-47.png — sysbench OLTP results]
Seems like there is something going on with the NVMe drives.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
That blows, looks like something is going on there.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Well, trying it on the Ceph storage now just to see what happens.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
And now we enter the completely bizarre sysbench result:
[Attached image: upload_2015-11-22_17-13-40.png — sysbench result on Ceph]

I even saw one Ceph request in the 195ms range!
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
I'd like to know what you're running to get that. Maybe we can all pitch in a VM's worth of data or two?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Jeggs101 said: I'd like to know what you're running to get that. Maybe we can all pitch in a VM's worth of data or two?
Might be worth a shot. But just as a sanity check I tested a Samsung XS1715 alone using:
Code:
hdparm -tT --direct /dev/nvme0n1
and got:
Code:
root@fmt-pve-01:/home/test# hdparm -tT --direct /dev/nvme0n1

/dev/nvme0n1:

 Timing O_DIRECT cached reads:   5204 MB in  2.00 seconds = 2602.12 MB/sec
 Timing O_DIRECT disk reads: 6718 MB in  3.00 seconds = 2239.10 MB/sec
and also the Intel DC P3600:

Code:
# hdparm -tT --direct /dev/nvme2n1

/dev/nvme2n1:
 Timing O_DIRECT cached reads:   3256 MB in  2.00 seconds = 1628.05 MB/sec
 Timing O_DIRECT disk reads: 4604 MB in  3.00 seconds = 1534.54 MB/sec
Here is a SAS2 Samsung SM1625 200GB:
Code:
# hdparm -tT --direct /dev/sdf

/dev/sdf:
 Timing O_DIRECT cached reads:   1032 MB in  2.00 seconds = 515.12 MB/sec
 Timing O_DIRECT disk reads: 1346 MB in  3.00 seconds = 448.37 MB/sec
And a SATA III Intel S3710 200GB:
Code:
# hdparm -tT --direct /dev/sdb

/dev/sdb:
 Timing O_DIRECT cached reads:   850 MB in  2.00 seconds = 424.21 MB/sec
 Timing O_DIRECT disk reads: 1426 MB in  3.00 seconds = 475.01 MB/sec
Raw read speeds clearly favor the NVMe drives. If the problem were the ZFS mirror itself, I would expect the S3710s to show similarly poor results, if not worse given the slower underlying hardware.
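One caveat: hdparm -tT only measures sequential reads, while sysbench OLTP is dominated by small random, often synchronous, I/O. A quick way to check random 4K behavior on the raw device is a read-only fio run along these lines (device name and queue depth are illustrative):

```shell
# Random 4K reads with direct I/O for 60 seconds (read-only, example parameters)
fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

If the NVMe drives look fine here too, that would point the finger at the ZFS/NVMe interaction rather than the drives themselves.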