So, I want to build a TrueNAS CORE server as the storage backend for all my VM data. Right now I'm planning for 30 VMs: most doing light browsing, some doing heavier tasks like Photoshop and gaming (say a 70/30 split), and possibly some database VMs in the future, so a lot of I/O performance is required. (I also plan to expand later.)
I want TrueNAS and ZFS mostly for data protection and because I have used them in the past (I've used both the Linux and BSD versions of TrueNAS; right now I prefer the BSD version).
I was planning on an NVMe pool, but looking around, it seems that with ZFS you can't get much I/O performance out of an NVMe array.
So what approach would be the right call?
A SATA SSD array with a special metadata vdev?
An HDD array with a large cache?
I'm not going to need too much raw capacity for now (10 TB of fast storage would be enough, I'd say, but if I can get 20-25 TB, that would be ideal).
One more thing I'm not sure about: for a VM storage pool, which of these two approaches would be best?
1. Give one massive iSCSI device to the hypervisor and let it decide what to do (most likely it would end up with qcow2 for snapshotting; however, I'm planning on GPU passthrough, which I don't think qcow2 likes very much).
2. Make a per-VM zvol under a dataset, share each one via iSCSI to the hypervisor for its own VM, and use ZFS's snapshotting features.
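For reference, option 2 would look roughly like this on the ZFS side (a sketch; the pool name `tank`, the VM names, and the sizes are placeholders, not anything from my actual setup):

```shell
# Parent dataset to group all VM zvols
zfs create tank/vms

# One zvol per VM; volblocksize should roughly match the guest
# workload (16K is a common starting point for VM images over iSCSI)
zfs create -V 100G -o volblocksize=16K tank/vms/vm01
zfs create -V 100G -o volblocksize=16K tank/vms/vm02

# Per-VM snapshot before a risky change, and rollback if needed
zfs snapshot tank/vms/vm01@pre-update
zfs rollback tank/vms/vm01@pre-update

# Or snapshot every VM at once, recursively
zfs snapshot -r tank/vms@nightly
```

Each zvol would then get its own iSCSI extent in the TrueNAS sharing UI, so the hypervisor sees one LUN per VM.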
Also, is deduplication recommended for a setup like this?
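From what I understand, you can at least estimate the payoff before committing, since dedup can't easily be undone. `zdb -S` simulates deduplication on an existing pool without enabling it (again assuming a pool named `tank`):

```shell
# Simulate deduplication on an existing pool without enabling it;
# prints a DDT histogram and an estimated dedup ratio at the end.
zdb -S tank

# Rough RAM cost to sanity-check the result: each unique block needs
# a DDT entry (commonly cited as ~320 bytes). For example, 10 TB of
# unique 16K blocks is ~610M entries, i.e. roughly 200 GB of DDT,
# which is why dedup is rarely worth it for VM image storage.
```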