Help with selecting SSDs and pool layout for VM ZFS pool


evacuate_custody

New Member
Apr 21, 2023
Hi,

so I am planning to upgrade my homelab to 10GbE and want to move my VM storage from my ESXi host to my TrueNAS Scale host. I am planning to make a new pool out of older enterprise SATA SSDs, since I want the VMs to be as responsive as possible and to saturate the 10G link. I will also add a cache SSD to both the new pool and my HDD pool, which will either be an Intel Optane 900P or an HP ioDrive. For the SSDs, I am looking at refurbished Intel DC S3520 1.6TB drives, since I can get them for around 75€ each on eBay.

1. The rated IOPS for the SSD are quite a bit lower than those of consumer SSDs (67.5k/17k vs. 98k/90k on an 860 EVO). Is that normal for enterprise SSDs, and is that going to be a problem?
2. RAIDZ(2) vs. striped mirrors: Looking through the forums, people recommend striped mirrors for performance (i.e. VMs) and RAIDZ(2) for bulk storage. I would however lose a lot more usable storage by going with striped mirrors, which means a higher cost per GB (rough numbers in the sketch after this list). How much would adding the Intel Optane 900P as a cache drive offset the performance loss of RAIDZ? Is the difference going to be that noticeable over a 10G link?
3. I am also considering just going for single-drive vdevs, to get the maximum amount of storage possible out of the SSDs, as I don't care about the VM disks themselves. All my important data is stored on my NAS via NFS and I can rebuild the VMs easily with Packer and Terraform.
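
For a rough feel of the capacity trade-off between those three layouts, here is a minimal back-of-envelope sketch. It assumes eight of the 1.6TB drives (the drive count is an assumption, not something settled in this thread) and ignores ZFS metadata, padding and the usual fill-level guidelines:

```python
# Rough back-of-envelope for the three layouts being weighed.
# Assumption: 8 drives of 1.6 TB each, raw capacity only --
# ZFS metadata, padding and the ~80% fill guideline are ignored.

DRIVES = 8
SIZE_TB = 1.6

layouts = {
    # striped mirrors: half the raw space, one drive per mirror pair can fail
    "striped mirrors (4x 2-way)": {
        "usable": DRIVES / 2 * SIZE_TB,
        "fault tolerance": "1 drive per mirror pair",
    },
    # raidz2: two drives' worth of parity across a single vdev
    "raidz2 (8-wide)": {
        "usable": (DRIVES - 2) * SIZE_TB,
        "fault tolerance": "any 2 drives",
    },
    # single-drive vdevs: all the space, no redundancy at all
    "stripe of single vdevs": {
        "usable": DRIVES * SIZE_TB,
        "fault tolerance": "none -- one failure kills the pool",
    },
}

for name, info in layouts.items():
    print(f"{name:30s} ~{info['usable']:.1f} TB usable, tolerates: {info['fault tolerance']}")
```

As a rule of thumb, random IOPS scale with the number of vdevs, which is where the "mirrors for VMs" advice comes from. Also worth noting: a "cache" device in ZFS terms is an L2ARC and only accelerates reads, so it doesn't offset RAIDZ write behaviour; sync writes are what a SLOG (e.g. the Optane) helps with.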

I am mainly going for SSDs instead of HDDs for the obvious performance benefits, and also because I already have a SuperMicro CSE216 case I can throw them into. I'm probably not going to go for HDDs, but from your experience, how much of a difference do SSDs make in the real world (responsiveness, boot times) compared to HDDs?

Apart from the DC S3520 SSDs, are there any other recommended drives to look for?

I appreciate any help I can get here, since I am still quite a beginner when it comes to storage.
 

acquacow

Well-Known Member
Feb 15, 2017
I have stacks of the 1.6TB Intel S3500/3520 in my environment, but I wouldn't set up NFS in FreeNAS to back ESXi due to all the sync writes and the reliance on a SLOG for performance.

Grab a spare piece of hardware and throw proxmox on it with a lot of DRAM and migrate the VMs out of ESX into proxmox. Life will be much simpler then.

With 8x 1.6TB in proxmox in raidz2, I'm seeing over 3GB/sec reads and 1GB/sec writes in my VMs.


-- Dave
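
To get a feel for why those sync writes hurt without a SLOG, here is a minimal sketch comparing buffered and fsync'd small writes; run it against a file on the pool in question (the path below is a placeholder). Sync-heavy setups such as ESXi over NFS behave like the fsync'd loop:

```python
# Minimal sketch: compare buffered 4K writes vs fsync'd 4K writes.
# The path below is a placeholder -- point it at the pool under test.
import os, time

PATH = "/mnt/testpool/syncprobe.bin"   # hypothetical mount point
BLOCK = b"\0" * 4096
N = 1000

def timed_writes(do_fsync: bool) -> float:
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.perf_counter()
    for _ in range(N):
        os.write(fd, BLOCK)
        if do_fsync:
            os.fsync(fd)               # force the write to stable storage
    elapsed = time.perf_counter() - start
    os.close(fd)
    return N / elapsed

print(f"buffered: ~{timed_writes(False):,.0f} writes/s")
print(f"fsync'd : ~{timed_writes(True):,.0f} writes/s")
os.remove(PATH)
```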
 

evacuate_custody

New Member
Apr 21, 2023
I have stacks of the 1.6TB Intel S3500/3520 in my environment, but I wouldn't set up NFS in FreeNAS to back ESXi due to all the sync writes and the reliance on a SLOG for performance.

Grab a spare piece of hardware and throw proxmox on it with a lot of DRAM and migrate the VMs out of ESX into proxmox. Life will be much simpler then.

With 8x 1.6TB in proxmox in raidz2, I'm seeing over 3GB/sec reads and 1GB/sec writes in my VMs.


-- Dave
I have tried Proxmox before, and it's OK, but the integration with tools like Ansible and Terraform is just way worse compared to vSphere, so I won't be switching.

Would creating the datastore over iSCSI solve the problem of the syncs? I read in another post somewhere that one should use NFS for file storage and iSCSI for block storage. I have never used iSCSI before, so I don't know.
 

mrpasc

Well-Known Member
Jan 8, 2022
Munich, Germany
1. The rated IOPS for the SSD are quite a bit lower than those of consumer SSDs (67.5k/17k vs. 98k/90k on an 860 EVO). Is that normal for enterprise SSDs, and is that going to be a problem?
This is normal, and the enterprise ones will definitely work better.
For consumer SSDs, most if not all manufacturers provide „up to“ performance figures, meaning they measure with empty drives and workloads that fit into the DRAM or pseudo-SLC cache. The drives will never reach those figures under heavy load. It's a marketing thing.
For datacenter or enterprise SSDs, the performance figures are to be read as „at least“ figures under real-world workloads. They can do better with light workloads and hold their rated performance under sustained load.
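
One way to see this for yourself is a sustained-write probe: write large chunks in a loop and watch the per-chunk throughput. On a consumer drive the rate typically drops once the pseudo-SLC cache fills, while a datacenter drive stays flat. A minimal sketch (the path and the ~16 GiB total are placeholders; only run it against a scratch file on the drive under test):

```python
# Rough sketch of a sustained-write probe: write large chunks and print
# throughput per chunk. WARNING: writes ~16 GiB to the target path.
import os, time

PATH = "/mnt/scratch/sustained.bin"    # hypothetical scratch location
CHUNK = b"\xff" * (256 * 1024 * 1024)  # 256 MiB per write
TOTAL_CHUNKS = 64                      # ~16 GiB total

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
for i in range(TOTAL_CHUNKS):
    t0 = time.perf_counter()
    os.write(fd, CHUNK)
    os.fsync(fd)                       # keep the page cache from hiding the drive
    mb_s = len(CHUNK) / (1024**2) / (time.perf_counter() - t0)
    print(f"chunk {i:02d}: {mb_s:7.1f} MB/s")
os.close(fd)
os.remove(PATH)
```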
 

acquacow

Well-Known Member
Feb 15, 2017
I have tried Proxmox before, and it's OK, but the integration with tools like Ansible and Terraform is just way worse compared to vSphere, so I won't be switching.

Would creating the datastore over iSCSI solve the problem of the syncs? I read in another post somewhere that one should use NFS for file storage and iSCSI for block storage. I have never used iSCSI before, so I don't know.
It's just KVM under the covers; Ansible and Terraform work just fine with Proxmox...
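
For what it's worth, Proxmox exposes a REST API that the Terraform and Ansible providers for it build on. Below is a minimal sketch of driving that API directly from Python with the third-party proxmoxer client; the host, credentials, node name and VM id are placeholders, not anything from this thread:

```python
# Minimal sketch of driving the Proxmox REST API from Python, using the
# third-party "proxmoxer" client (pip install proxmoxer requests).
# Host, credentials, node name and VM id below are placeholders.
from proxmoxer import ProxmoxAPI

prox = ProxmoxAPI("pve.example.com", user="root@pam",
                  password="changeme", verify_ssl=False)

# enumerate nodes and the VMs on each of them
for node in prox.nodes.get():
    name = node["node"]
    for vm in prox.nodes(name).qemu.get():
        print(f'{name}: vmid={vm["vmid"]} name={vm.get("name")} status={vm["status"]}')

# start a (hypothetical) VM by id on a given node
prox.nodes("pve01").qemu(100).status.start.post()
```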