Hi,
so I am planning to upgrade my homelab to 10GbE and want to move my VM storage from my ESXi host to my TrueNAS Scale host. I am planning to build a new pool out of older enterprise SATA SSDs, since I want the VMs to be as responsive as possible and to saturate the 10G link. I will also add a cache SSD to both the new pool and my HDD pool, which will either be an Intel Optane 900P or an HP ioDrive. For the SSDs, I am looking at refurbished Intel DC S3520 1.6TB drives, since I can get them for around 75€ per drive on eBay.
1. The IOPS for the S3520 are quite a bit lower than on consumer SSDs (67.5k/17k read/write vs 98k/90k on an 860 EVO). Is that normal for enterprise SSDs, and is it going to be a problem?
2. RAIDZ(2) vs striped mirrors: looking through the forums, people recommend striped mirrors for performance-sensitive workloads (e.g. VMs) and RAIDZ(2) for bulk storage. However, I would lose a lot more usable storage by going with striped mirrors, which means a higher cost per GB. How much would adding the Intel Optane 900P as a cache drive offset the performance loss of RAIDZ? Is the difference going to be that noticeable over a 10G link?
3. I am also considering just going for single-drive vdevs to get the maximum amount of storage out of the SSDs, as I don't care about the VM disks themselves. All my important data is stored on my NAS via NFS, and I can rebuild the VMs easily with Packer and Terraform.
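To make the trade-off concrete for myself, I did a rough back-of-the-envelope comparison of the three layouts. This is just a sketch: the drive count (8) is my assumption, the per-drive write IOPS is the S3520 spec number from above, and the scaling uses the usual ZFS rules of thumb (each mirror or RAIDZ vdev delivers roughly one drive's worth of random-write IOPS, and vdevs stripe, i.e. add up), not measurements:

```python
# Rough usable-capacity and random-write-IOPS estimates for an all-SSD pool.
# Assumptions: 8x 1.6 TB drives, ~17k random-write IOPS per drive (S3520 spec),
# and the common ZFS rules of thumb: one vdev ~= one drive's write IOPS,
# and IOPS scale with the number of vdevs in the stripe.

DRIVES = 8
SIZE_TB = 1.6
DRIVE_WRITE_IOPS = 17_000

layouts = {
    # name: (usable TB, number of vdevs)
    "striped mirrors (4x 2-way)":   ((DRIVES // 2) * SIZE_TB, DRIVES // 2),
    "RAIDZ2 (single 8-wide vdev)":  ((DRIVES - 2) * SIZE_TB, 1),
    "stripe of single-drive vdevs": (DRIVES * SIZE_TB, DRIVES),
}

for name, (usable_tb, vdevs) in layouts.items():
    print(f"{name:30s} usable ~ {usable_tb:5.1f} TB, "
          f"write IOPS ~ {vdevs * DRIVE_WRITE_IOPS:,}")
```

If those rules of thumb hold, sequential throughput should saturate 10GbE with any of these layouts; it's the random-write side where RAIDZ2 falls well behind the striped options.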
I am mainly going for SSDs instead of HDDs for the obvious performance benefits, and also because I already have a SuperMicro CSE216 case I can throw them into. I'm probably not going to go with HDDs, but in your experience, how big is the real-world difference (responsiveness, boot time) between the two?
Apart from the DC S3520, are there any other recommended drives to look for?
I appreciate any help I can get here, since I am still quite a beginner when it comes to storage.