Hi All,
I’ll preface this by saying I’ve worked with large-scale Isilon, NetApp, VMAX, etc. solutions, mostly 5+ years ago, and mostly focused on low-latency, high-speed, reliable access (supercompute storage and processing, ~300 x 10G desktops accessing 40-80GB datasets, plus slower long-term storage for archived projects).
But I’m a complete noob when it comes to FreeNAS/TrueNAS, and likewise some of the more recent tech, i.e. NVMe and PCIe-based flash drives.
I’m testing different configs at the moment and wouldn’t mind some critiquing/suggestions as to what I can do better, to help with speed and expansion.
That being said, here’s what I’m working with:
* Dell R720 dual E5-2620 v0
* 128GB DDR3, ECC
* Onboard 2x10Gbps NDC
* Dell 12Gbps external SAS HBA, PCIe 3.0, installed in one of the x16 slots
* Integrated H310 flashed to IT mode
* No-name-brand NVMe-to-PCIe adapter in an x4 slot
* 500GB Crucial P2 3D NAND NVMe
* Dell MD1220 with 4 x Kingston 1TB SSDs and 16 x 1.2TB 10k SAS drives (ex-NetApp, reformatted to 512-byte sectors)
* 4 x 3.5” 8TB WD 7200rpm SATA drives - unconfigured at the moment
* TrueNAS 12.x beta installed as a VM under ESXi 7.0
* Both the SAS HBA and the H310 are passed through to the VM (32GB RAM, 12 vCPU), likewise the NVMe card - although the system detects just the drive rather than the card
* 1 x 10Gb port passed directly through to the VM, plus 1 x 10G vNIC
iPerf shows 7.9Gbps with an MTU of 1500, and low 9s with an MTU of 9000.
The test PC is using a RAM drive rather than disk for the copies.
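For anyone wanting to reproduce the network test, this is roughly what I’m running (hostname is a placeholder for my TrueNAS VM):

```shell
# On the TrueNAS VM (server side):
iperf3 -s

# On the test PC (client side): 4 parallel streams, 10 seconds
iperf3 -c truenas.local -P 4 -t 10

# Then re-test after setting MTU 9000 on both NICs and the switch
```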
TrueNAS is set up as follows:
* 2 x 8-drive RAIDZ1 vdevs in one pool
* L2ARC on the Crucial NVMe
* 2 x 1TB SSDs striped (RAID 0) as the log (SLOG) device
* 1 x 1TB SSD as the dedup vdev (not sure this actually does anything)
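For clarity, here’s roughly how that layout would be built from the CLI (pool and device names are illustrative, not my actual ones):

```shell
# One pool, two 8-wide RAIDZ1 data vdevs
zpool create tank \
  raidz1 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz1 da8 da9 da10 da11 da12 da13 da14 da15

# L2ARC (read cache) on the Crucial NVMe
zpool add tank cache nvd0

# Two SSDs striped as the SLOG (only used for sync writes)
zpool add tank log da16 da17

# One SSD as a dedicated dedup vdev
zpool add tank dedup da18
```

As far as I can tell, the dedup vdev only does anything if deduplication is actually enabled on a dataset (`zfs get dedup tank` to check), which may explain why mine looks idle.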
My max read/write tends to top out at ~450MB/s, both from a client on a different VM and from a VM on the same box.
I can’t help but think the performance should be a lot better, especially for writes. My understanding is that it should be writing to the ARC, then the L2ARC, and only finally to the spinning rust? The files in question don’t exceed the size of either cache.
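These are the commands I’ve been using to sanity-check the dataset settings and cache behaviour (dataset name is a placeholder):

```shell
# Dataset properties that affect the write path
zfs get sync,recordsize,compression,dedup tank/share

# ARC size and hit/miss summary
arc_summary

# Live ARC stats, one-second interval
arcstat 1
```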
The 8TB SATA drives haven’t been added yet, as I’m not sure how best to integrate them:
* do I just add another vdev and assign it to the pool, or create a new pool for “slow storage”?
* the 8TB drives are still somewhat in use in my current NAS, and I’m not ready to move them over until I’m happy with the TrueNAS setup
* or can I add the SATA vdev to the main pool?
I suspect it will be the former.
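In case it helps the discussion, the two options as I understand them look like this (pool/device names illustrative):

```shell
# Option A: separate "slow" pool - keeps the fast pool's performance predictable
zpool create slow raidz1 ada0 ada1 ada2 ada3

# Option B: add the SATA drives as another vdev in the existing pool.
# ZFS stripes writes across all vdevs, so I'd expect the slow vdev
# to drag down the whole pool - hence leaning towards Option A.
zpool add tank raidz1 ada0 ada1 ada2 ada3
```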
I realise that 400-450MB/s via CIFS on a single transfer isn’t bad, but I was expecting 600-700.
The 400-450 would seem to match the 6Gbps max link speed of the MD1220 enclosure.
Specs for the Crucial P2 aren’t fantastic, but should be more than enough ?
Crucial P2 500GB 3D NAND NVMe PCIe M.2 SSD, CT500P2SSD8 (Amazon.com.au)
PCIe NVMe adapter: GLOTRENDS Dual M.2 X4 PCIe Adapter, PA12-HS (I’m using its SATA M.2 slot as well, for VM logs)
In summary, what can I do to make this better?
* should I ditch running it as a VM and go bare metal? Hard choice, due to the system resources that would go to waste
* should I replace the L2ARC with something a bit more enterprise-grade, e.g. an Intel P/D series?
* how can I make use of the slower but larger-capacity spinning disks?
* what tweaks can I look at to get the transfer speeds up, either on TrueNAS or the client OS?
* or do the numbers my setup returns seem expected?
* would a 12Gbps enclosure make any difference, given the cache already isn’t being utilised? (The drives should be fine due to spindle count)
* am I missing something really basic? E.g. the CPU being too underpowered to handle the parity calcs while handling the IO from the 10Gbps NIC + HBA
* does TrueNAS have the ability to report on IOPS? What stats should I be looking at?
* if I was to go down the path of an Intel DC PCIe storage card, is it possible to partition it so it could be used for multiple pools?
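On the last two points, this is what I’ve found so far - per-vdev stats are built in, and on the FreeBSD-based CORE a single device can be GPT-partitioned and the partitions given to different pools (sizes and label names below are just examples):

```shell
# Per-vdev bandwidth and IOPS, refreshed every second
zpool iostat -v tank 1

# FreeBSD per-disk IOPS/latency view
gstat -p

# Splitting one NVMe device across two pools via GPT partitions
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 100G -l slog0 nvd0   # first partition
gpart add -t freebsd-zfs -l l2arc0 nvd0          # rest of the disk
zpool add tank log gpt/slog0
zpool add slow cache gpt/l2arc0
```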
I have tried Unraid, which was absolutely dismal - 120-150Mbps for read, 40-50Mbps for write.
Sorry, I know this is a lot to read through on a Sunday, but any assistance from those with more experience would be highly appreciated - more than happy to buy a carton of beer for the help!
Btw this is not commercial, this is not for a company, this is purely a home lab setup.