Hi,
I'm trying to set up a new shared storage server to replace our old QNAP TS-463-RP.
Core switch is a Ubiquiti ES-16XG.
The QNAP is running with 16GB of RAM and two Samsung MZ7KH3T8HALS-00005 SSDs in RAID1.
Connected over copper (quality cable, short run in the same rack).
I find the performance lacking for our ESXi host with a dozen or so VMs, mostly Linux, some Windows.
I got a DL360 Gen9 for free, installed the latest TrueNAS SCALE, and upgraded it to the following specs:
- 1x E5-2640 v4 CPU (10c/20t)
- 8x 32GB RAM for a total of 256GB
- Emulex OneConnect OCe14102B-NT 10GBASE-T, again quality cable, short run in the same rack. 1x 10G uplink
- Storage:
-- P440ar in HBA mode
-- OS: 2x cheapo SSDs
-- SAS pool to test: 2x ST900MM0006 10k rpm SAS drives (mirror)
-- SSD pool to test: 1x Samsung 850 EVO 500GB
Tests were run with dd over NFS. The VM runs on the ESXi host, which has dual 10G uplinks to the core switch.
time dd if=/dev/zero of=/mnt/tn-stor-sas/testfile bs=16k count=256k
time dd if=/mnt/tn-stor-ssd/testfile of=/dev/null bs=16k
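Worth noting: dd from /dev/zero without sync or direct flags mostly measures the client page cache and the server's ARC, and bs=16k is on the small side for sequential throughput. A more cache-resistant variant would look like this (a sketch assuming GNU coreutils dd on the Linux VM, same test paths):
time dd if=/dev/zero of=/mnt/tn-stor-sas/testfile bs=1M count=4096 conv=fdatasync  # flush data to the server before the timing stops
time dd if=/mnt/tn-stor-sas/testfile of=/dev/null bs=1M iflag=direct  # bypass the client page cache on the read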
Both give roughly 300MB/s on reads. The QNAP is a bit slower on writes (250MB/s) versus 370-380MB/s for TrueNAS with either the SSD or the SAS target.
I find the reads (at least for the SAS mirror) lacking. Shouldn't they be higher? Is this a TrueNAS thing, or should I look for something in the network?
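To see which side stalls during a transfer, I can watch the pool from the TrueNAS shell while the test runs (assuming the pool name matches the mount point, tn-stor-sas):
zpool iostat -v tn-stor-sas 1  # per-vdev throughput, updated every second
iostat -x 1  # per-disk utilization/await, if sysstat's iostat is present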
iperf3 gives about 8.8Gbps between the VM and the TrueNAS box.
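That was a single stream; parallel streams and the reverse direction would rule out a single-flow ceiling (standard iperf3 flags, <truenas-ip> is a placeholder):
iperf3 -c <truenas-ip> -P 4  # four parallel streams
iperf3 -c <truenas-ip> -R  # reverse direction, server sends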
There's also a second TrueNAS box (a Dell T3620, IIRC) with a much lower-spec CPU and less RAM. It has the same Samsung MZ7KH drives in a mirror and gives the same performance.
Should I be getting NVMe instead? I was kind of expecting to saturate the SATA SSD bandwidth at around 500MB/s.
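Before buying NVMe I want to isolate the disks from the network with a local run on the TrueNAS box itself. A sketch, assuming fio is available on SCALE and the pool is named tn-stor-sas (primarycache is toggled so the ARC doesn't serve the whole read out of the 256GB of RAM, then reverted):
zfs set primarycache=metadata tn-stor-sas
fio --name=seqread --filename=/mnt/tn-stor-sas/fiotest --size=8G --rw=read --bs=1M --ioengine=libaio
zfs set primarycache=all tn-stor-sas
If that local read lands well above 300MB/s, the bottleneck is NFS or network tuning rather than the drives.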