I'm currently evaluating the possibility of building a NAS/SAN (iSCSI/NFS) shared between 3 lab servers (all HP ML310e Gen8 v2) on an upcoming 10 Gbps network (ConnectX-2 NICs connected to a shared D-Link DGS-1510-28X). As I would like to keep some services "always-on", my goal is to keep VMware on a 4th server and run the NAS/SAN in a VM beside the virtual FW and AD. This "storage unit" would only be for VMware stuff, as I already have some slower storage for everything else.
So, my first idea was to use ZFS, as it allows using flash to accelerate reads and writes (L2ARC and ZIL/SLOG), or to go all-flash with just about any offering that does iSCSI and NFS. Before even getting there, I wanted to do some testing on a single Samsung 850 EVO in various contexts. I have read numerous times that disk performance is not affected much by the hypervisor, but I wanted to confirm how my setup handles this. I started by finding a test that I could easily reproduce with this disk. I found a test on benchmarkreviews.com that was done with IOMeter on various disks, and the 850 EVO was there, so I decided to compare what they got (Kingston HyperX Savage SSD Benchmark Performance Review) with what I'm seeing.
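For reference, here is roughly what I have in mind on the ZFS side. Just a sketch; the pool name and device names are placeholders, and worth noting that the ZIL/SLOG only helps synchronous writes (which is exactly what NFS and iSCSI from ESXi tend to generate):

    # hypothetical layout, placeholder device names
    zpool create tank mirror da0 da1          # main pool on a mirrored pair
    zpool add tank log mirror nvd0 nvd1       # SLOG: accelerates sync writes
    zpool add tank cache nvd2                 # L2ARC: extends the read cache
    zpool status tank                         # confirm the layout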
In their testing with 100% random, 50/50 read/write, 4KB, QD32, they see about 86K IOPS on their Asus motherboard with an i7-2600 CPU running Windows 7.
I downloaded their exact IOMeter .icf file and ran it on my ML310e, which has 18 GB of RAM, an i3-4150 and an HP H220 HBA (LSI 2308 in IT mode), running Windows Server 2016 TP3. I'm getting just short of 42K IOPS after 120 seconds (same test duration as theirs). I've done numerous tests with this 850 EVO in the last few weeks, so it may not be in an optimal state, but half their speed is still rather slow.
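As a sanity check independent of IOMeter, the same access pattern can be approximated with Microsoft's DiskSpd. A minimal sketch, where the test file path is a placeholder you would point at the SSD:

    rem 4KB blocks, 100% random, 50% writes, QD32, 120 s, caching disabled
    diskspd.exe -c8G -b4K -r -w50 -o32 -t1 -d120 -Sh -L D:\iotest.dat

If DiskSpd lands in the same ballpark, at least the 42K figure isn't an IOMeter artifact.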
The second step was to try this in a VM. I booted the exact same Windows install in VMware (RDM is nice for this) and tried adding the SSD both as a VMDK and as an RDM. In both cases, I'm getting the same performance of about 26500 IOPS for the same test. So I can confirm that VMDK and RDM seem to provide the same level of performance, but... I can't say the same when comparing the test in a VM vs on native hardware. 42K vs 26.5K IOPS is not the same. Latency and CPU usage also increase in the VM (1.2 ms vs 0.7 ms, and 52% CPU vs 21% on native hardware). BTW, I'm on ESXi 6 U1 and the H220 is officially supported by VMware and HP.
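To see where the extra latency is being added, my plan is to watch the latency counters in esxtop on the host while the test runs (standard esxtop usage; the vmhba/device names will be whatever the H220 shows up as):

    esxtop        # then press 'd' for the adapter view, 'u' for the device view
    # DAVG = latency at the device/HBA, KAVG = time spent in the VMkernel,
    # GAVG = DAVG + KAVG = total latency as seen by the guest

If KAVG is where the extra ~0.5 ms lives, the overhead is in the hypervisor's storage stack rather than in the HBA or the SSD.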
Is there any tuning to do at the ESXi level to maximize the performance we can get from an SSD and a good HBA? I'm planning on installing the H220 in one of the ML310e servers that has a Xeon E3-1230 v3 and passing the HBA through to the guest OS to see the impact, but I didn't want to dedicate even a small Xeon to that task. I've seen QNAP and Synology units that are able to nearly fill a 10 Gbps link using an Intel Atom or Celeron, so there is certainly something that can be done with an i3.
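From what I've read so far, the usual suspects would be the virtual SCSI controller and the per-device outstanding I/O limit. A sketch of what I intend to try, where the naa.* identifier is a placeholder and the value is just something to test, not a recommendation:

    # in the VM's settings: switch the virtual controller to PVSCSI, then on the host:
    esxcli storage core device list                    # find the SSD's naa.* identifier
    esxcli storage core device set -d naa.XXXX -O 64   # raise outstanding I/Os (default 32)

No idea yet whether either moves the needle; that's partly why I'm asking.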
Any ideas or comments? Don't hesitate to ask if you need more details. Thank you.
ehfortin