Suggestions for an all-NVMe storage server


crashpb

New Member
Jun 9, 2022
So, I want to build a TrueNAS Core server as a storage backend for all my VMs' data. Right now I'm planning for 30 VMs, most doing light browsing and some doing heavier tasks like Photoshop and gaming (say a 70-30 split), plus maybe some database VMs in the future, so a lot of I/O performance is required (I also plan to expand later).

I want TrueNAS and ZFS mostly for data protection and because I have used them in the past (I have used both the Linux and BSD versions of TrueNAS; right now I prefer the BSD version).

I was planning on an NVMe pool, but looking around it seems that with ZFS you can't get much I/O performance out of an NVMe array.

So what approach would be the right call?

A SATA SSD array with a special metadata device? (see the rough sketch below)

An HDD array with a large cache?
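For clarity, by the special metadata device option I mean roughly the layout below; it's only a sketch with placeholder device names, not a recommendation (a Python wrapper around the usual zpool/zfs commands):

```
# Rough sketch only: a RAIDZ2 pool of SATA SSDs plus a mirrored special
# (metadata/small-block) vdev. All device names are placeholders.
import subprocess

data_disks = ["da0", "da1", "da2", "da3", "da4", "da5"]   # SATA SSDs (hypothetical)
special_disks = ["nvd0", "nvd1"]                          # small, fast mirror (hypothetical)

subprocess.run(
    ["zpool", "create", "tank",
     "raidz2", *data_disks,
     "special", "mirror", *special_disks],
    check=True,
)

# Optionally let the special vdev also hold small file blocks:
subprocess.run(["zfs", "set", "special_small_blocks=32K", "tank"], check=True)
```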

I'm not going to need too big a raw capacity for now (10TB of fast storage would be enough, I'd say, but if I can get 20-25TB that would be ideal).

One more thing I'm not sure about: for a VM storage pool, which of these two approaches would be best?
1. Give one massive iSCSI device to the hypervisor and let it decide what to do (most likely ending up with qcow2 for snapshotting; however, I'm planning on GPU passthrough, which I don't think qcow2 likes very much).

2. Make a per-VM zvol under a dataset, share each one via iSCSI to the hypervisor for its own VM, and use ZFS's snapshotting features (roughly sketched below).
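To make option 2 concrete, this is roughly the workflow I have in mind; names and sizes are made up, and the iSCSI extent/target mapping itself would still be done separately (TrueNAS UI or ctld on CORE):

```
# Sketch of option 2: one thin-provisioned zvol per VM, each snapshotted
# (and rolled back) independently. Names and sizes are placeholders.
import subprocess

def zfs(*args):
    subprocess.run(["zfs", *args], check=True)

zfs("create", "tank/vms")                          # parent dataset
zfs("create", "-s", "-V", "200G",                  # sparse 200G zvol for one VM
    "-o", "volblocksize=16K", "tank/vms/vm01")
zfs("snapshot", "tank/vms/vm01@pre-update")        # per-VM snapshot
# zfs("rollback", "tank/vms/vm01@pre-update")      # roll back just this VM
```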

Also, is deduplication recommended for a setup like this?
 

crashpb

New Member
Jun 9, 2022
I don't think that ZFS is less suitable for NVMe than SATA SSDs. Where did you read such a thing?
Surfing around, I see people mentioning that you're not going to get the full I/O performance of your NVMe drives under ZFS.

Why do you want to use iSCSI?
So that a zvol appears as a disk to the VM?

Do note that I'm not using TrueNAS as the hypervisor, just as a NAS (a SAN, you could say).
 

ano

Well-Known Member
Nov 7, 2022
I'm able to get 18.7GiB/s of 128k 100% random writes with NVMe and ZFS, so you can max out 2x100Gbps, with AMD gen3; with Genoa I'm hoping to get more IOPS.
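Something along these lines with fio approximates that kind of 128k, 100% random-write run; the queue depth, job count, size and path here are assumptions, not the exact settings behind the numbers above:

```
# Hedged sketch of a 128k random-write fio run against a dataset; the values
# below are placeholders, not the real test parameters.
import subprocess

subprocess.run([
    "fio",
    "--name=zfs-randwrite",
    "--directory=/tank/fio",     # some dataset on the NVMe pool (placeholder)
    "--rw=randwrite",
    "--bs=128k",
    "--ioengine=posixaio",       # portable; libaio is the usual choice on Linux
    "--iodepth=32",
    "--numjobs=8",
    "--size=10G",
    "--time_based",
    "--runtime=60",
    "--group_reporting",
], check=True)
```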
 

crashpb

New Member
Jun 9, 2022
I'm able to get 18.7GiB/s of 128k 100% random writes with NVMe and ZFS, so you can max out 2x100Gbps, with AMD gen3; with Genoa I'm hoping to get more IOPS.
Nice.
Have you done any special configurations or tweaks?
Or just a raidzX pool setup without any special configs?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
We run all-NVMe in ZFS, as well as all-Optane in ZFS too. All the NVMe pools have an Optane SLOG, and all the Optane pools also have an Optane SLOG; it matters too.

ZFS is not for performance, it's for data security/safety/etc., so to make it perform better, use NVMe and Optane.
Is it as performant a file system as if you were running the drives off an NVMe RAID HBA... no.
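Adding the Optane as a SLOG to an existing pool is a one-liner; device and dataset names below are placeholders, and note that a SLOG only matters for synchronous writes:

```
# Sketch: add an Optane mirror as SLOG to an existing pool; placeholder names.
import subprocess

subprocess.run(["zpool", "add", "tank", "log", "mirror", "nvd2", "nvd3"], check=True)

# A SLOG only helps synchronous writes (iSCSI/NFS sync traffic, databases, or
# datasets with sync=always); plain async writes bypass it entirely.
subprocess.run(["zfs", "set", "sync=always", "tank/vms"], check=True)
```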
 

ano

Well-Known Member
Nov 7, 2022
Nice.
Have you done any special configurations or tweaks?
Or just a raidzX pool setup without any special configs?
Some; usually the key is just 2.1.6 (OpenZFS) and above, and lots of CPU / fast RAM.

We have run all-Optane P4800X arrays, and the P4800X as SLOG for other NVMe pools, etc., but didn't find any real gains from an Optane SLOG in front of fast NVMe (fast at QD=1), ref @T_Minus
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Some; usually the key is just 2.1.6 (OpenZFS) and above, and lots of CPU / fast RAM.

We have run all-Optane P4800X arrays, and the P4800X as SLOG for other NVMe pools, etc., but didn't find any real gains from an Optane SLOG in front of fast NVMe (fast at QD=1), ref @T_Minus
The results were not "OMG, much better" like SLOG vs. none is when running SSDs or HDDs, but it was an improvement during our tests.
The price of one Optane to increase performance was such a small % of these builds that we left them in use ;) and have had no complaints.
 

ano

Well-Known Member
Nov 7, 2022
Can I ask what drives and what Optane? We found queueing kind of negated any real gain.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Can I ask what drives and what Optane? We found queueing kind of negated any real gain.
Most of our capacity NVMe are 2TB Intel P3500s and P4500s and their variants. We also have some of the fast Samsung NVMe (~4TB each, IIRC), much faster than those Intels, and they too benefited slightly from the Optane SLOG. The all-Optane pools are 900p and 905p, and for SLOG we use those as well as the P4800X (pool of mirrors). I have a P5800X here somewhere that I was going to test with, but I doubt we'd see much benefit in a PCIe 3 system, if any... and now that I'm writing this, I just may put it in the new AMD 7900X system :eek: that's gonna feel snappy!

We didn't build these to be the fastest-performing systems possible; it was more about the best performance within a budget on ZFS (lol) for general VM usage, so some may run Windows, Linux, MySQL, etc. For the best performance for our purposes it's a high-frequency CPU + Optane + Linux bare metal, no hypervisor or ZFS... not really a surprise ;) but also not practical for all applications.

When we built these systems, NVMe drives were cheaper than SATA SSDs, and only now in 2023 does 2TB enterprise NVMe look similar in price to 2TB enterprise SATA SSD (used market).


For the purpose of this build I'd go for the cheapest Intel enterprise NVMe with the most life left, and get as many as you need for your capacity and pool-config needs. For example, the Intel P4510 8TB runs about $550. You get 3 of them and you're over your 20TB; then, depending on pool config, you could go with 4-6 and still be at 24TB raw. If it were me, I would go with 6 for capacity/redundancy/performance.
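As a rough sanity check on that capacity math (raw TB only, ignoring ZFS overhead, slop space and TB-vs-TiB), assuming 8TB drives:

```
# Ballpark usable-capacity math for 8TB drives; treat the output as rough only.
DRIVE_TB = 8

def usable_tb(n_drives, layout):
    if layout == "mirror":            # striped 2-way mirrors: half the raw space
        return n_drives * DRIVE_TB / 2
    if layout.startswith("raidz"):
        parity = int(layout[-1])      # raidz1 -> 1 parity drive, raidz2 -> 2
        return (n_drives - parity) * DRIVE_TB
    raise ValueError(layout)

for n in (3, 4, 6):
    for layout in ("raidz1", "raidz2", "mirror"):
        if layout == "mirror" and n % 2:
            continue                  # mirrors need an even drive count here
        print(f"{n} x {DRIVE_TB}TB {layout}: ~{usable_tb(n, layout):.0f} TB usable")
```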
 