First off, thank you all for your contributions here on the forum. I'm not an active poster, but I read here often, as I'm really into all kinds of computer and networking topics.
For the last couple of years I have run an AiO setup: ESXi with 4-5 VMs, one of them OmniOS with napp-it for ZFS, as well as a few other machines (no heavy loads). I often download files to temporary storage on the SAS drives through a virtual Win2019 machine, and then transfer them to the RAIDZ2 array. Hardware:
- ASRock Rack D1541D4U-2T8R (Xeon D-1541, SAS3008, X550 10GbE onboard)
- 512 GB SATA SSD for ESXi and the virtual machines, mounted directly in ESXi
- X-Case RM424 Pro-EX V2 24-bay (SAS expander, 2x4 ports, 12 Gb/s; I assume an LSI-chipset expander)
- 128 GB ECC RAM @ 2400 MHz, 80 GB of it given to OmniOS/ZFS. All CPU cores passed through to OmniOS
- HBA passthrough: 3x 900 GB HUSMM1680ASS in a stripe, plus a RAIDZ2 of 11x Seagate ST8000VN0022
- ESXi 6.7U2, OmniOS v11 r151032p, napp-it 19.12a1, Win2019 DC
However, I've never been really satisfied with the ZFS performance. A local mc file transfer on OmniOS from the SAS array to the RAIDZ2 array gives 500-700 MB/s (large ~20 GB file). The CPU looks close to maxed out in the ESXi overview. Mounting the disks over SMB in the virtual Win2019 machine (vmxnet3 driver) gives around 220-240 MB/s for the same SAS-to-RAIDZ2 transfer.
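To separate pool throughput from mc/SMB overhead, a timed dd copy is a quick sanity check. This is only a sketch with placeholder paths (the `/tmp/*_pool` directories are assumptions, not your actual mountpoints) — on the real box, point SRC/DST at the SAS stripe and RAIDZ2 mountpoints and use a file well past the 80 GB ARC size:

```shell
# Placeholder paths -- substitute the real pool mountpoints on OmniOS.
SRC=/tmp/src_pool
DST=/tmp/dst_pool
mkdir -p "$SRC" "$DST"

# Create a test file on the source. Use a much larger count on a real
# system (e.g. count=20480 for ~20 GB) so ARC caching doesn't skew reads.
dd if=/dev/urandom of="$SRC/testfile" bs=1M count=64 2>/dev/null

# Timed copy; conv=fdatasync (GNU dd) makes dd wait for the data to reach
# stable storage before printing its MB/s figure, so the number is honest.
# illumos dd lacks fdatasync -- there, run sync after the copy instead.
dd if="$SRC/testfile" of="$DST/testfile" bs=1M conv=fdatasync
```

While the copy runs, `zpool iostat -v 2` in a second shell shows whether the load is spread evenly across the RAIDZ2 vdev members or one disk is dragging.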
Now for my sanity check. I'm considering a hardware upgrade: keep the disks, chassis and RAM, and replace the aging D-1541 motherboard with this:
- X570 motherboard (considering the Asus X570-ACE or a Gigabyte board, to get more PCIe/NVMe ports)
- AMD Ryzen 3700X (seems to be the sweet spot between price and performance)
- Add an HBA, SAS3008 (buy a 9300-8i off eBay)
Additional considerations:
- Add a 2 TB PCIe 4.0 NVMe drive (like the Corsair MP600) for the VMs, possibly replacing the need for the SAS array
- Maybe add a graphics card that can be passed through to Win2019, and get CALs for 3-4 remote users. Recommendations?
- Skip a 10 GbE NIC for now, as most transfers are local on the host
Other:
I might wait until Samsung releases their PCIe 4.0 980 NVMe instead of going with the Corsair MP600. I'm unsure whether it should be mounted in OmniOS and then shared over the network back to ESXi for the VMs, or just attached directly to the ESXi host itself?
What are your thoughts?