Hello~
I'm about to make the shift from Windows Server 2016 with Storage Spaces for CIFS/iSCSI over to Linux with ZFS, hoping to get better disk & network performance out of my hardware. Any advice on how best to set it up, or anything I should consider, would be greatly appreciated.
NAS Server Hardware
SuperMicro 36x Bay SuperChassis w/ Dual Intel X5560 QC @ 2.8GHz (8 cores total)
80GB DDR3 ECC
7x PCIe v2.0 x8 (in x16 Slots)
5x LSI 9211-8i's in IT Mode (latest FW)
1x Crucial 250GB SSD (Boot Drive)
4x Samsung 250GB 850 EVO Series (cache only - don't need capacity, need speed)
10x Hitachi 4TB NAS Edition 7.2K 64MB Cache (Pool Storage)
All disks are spread evenly across the SATA3 controllers
- 2x SSDs per 1x LSI (on separate channels)
- 10x HDDs split up on 2x LSI
- 1x Boot drive on 1x LSI
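(For what it's worth, this is roughly how I plan to sanity-check that disk-to-HBA spread once I'm on Linux, using udev's by-path links and lsscsi if it's installed:)

    # Show which SCSI host (i.e. which LSI HBA) each disk hangs off
    lsscsi

    # Map block devices back to their PCI/SAS paths to confirm the spread across controllers
    ls -l /dev/disk/by-path/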
My biggest issue/bottleneck is coming up with a design that supports my older-generation InfiniBand NICs & switch, i.e. Mellanox ConnectX-2 40Gb QDR dual-port NICs and a Mellanox 4036-E 36-port 40Gb managed switch. All of my ESX servers and my main Windows desktop have these dual-port NICs, as does the NAS server itself. They require IPoIB drivers, which are spotty at best. If I were dealing with 1GbE or even 10GbE I wouldn't be terribly worried about squeezing every bit of performance out of the 4x SSDs. But with a pair of 40Gb NICs between all my hosts, I'd like the disks to come close to pushing that.
In my current setup, the Windows 2016 server is performing well enough... It has excellent InfiniBand support, and I get SMB 3.1 with RDMA, so from my primary desktop I'm seeing 1.2GB/s sustained read/write. However, when I tried NFS to my ESX farm, it was worse than 10MB/s. I switched over to iSCSI between the NAS and the ESX environment and achieved decent performance (~600MB/s). But after doubling the disks and SSDs in the pool, I saw no additional performance, which was disappointing.
So my question is... for an OS with best-of-breed ZFS support, do I have to focus on Solaris only, or is something like Ubuntu 16.10 (which apparently has Mellanox-supported drivers) a good place to start?
And when it comes to ZFS design itself, how should I best use my hardware?
I want a single Pool, expandable with "like quantities" of hard drives for scalable capacity & performance.
i.e. I'll likely add disks in increments of 5x 4TB HDDs and 2x 250GB SSDs.
I'll be starting with 10x 4TB HDDs and 4x 250GB SSDs.
I'm thinking of having 1 disk in every 5 for redundancy, so I'm considering either a RAID 50-style layout of 2x (4+1) or a RAID 6-style layout of 1x (8+2).
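To make the two layouts concrete, here's a rough sketch of what I think each would translate to at pool creation (in ZFS terms, two 5-disk RAIDZ1 vdevs vs. one 10-disk RAIDZ2 vdev; device names below are just placeholders, I'd use /dev/disk/by-id paths in practice):

    # Option A: RAID 50-style, two 5-disk RAIDZ1 vdevs
    zpool create tank \
        raidz1 sda sdb sdc sdd sde \
        raidz1 sdf sdg sdh sdi sdj

    # Option B: RAID 6-style, one 10-disk RAIDZ2 vdev
    zpool create tank \
        raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj

    # Later growth under Option A: add another 5-disk RAIDZ1 vdev to the same pool
    zpool add tank raidz1 sdk sdl sdm sdn sdo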
For the SSDs, I'm curious whether I can/should partition each of them,
and use a portion of each for the ZIL, L2ARC, and whatever other caching mechanism I'm forgetting.
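If it helps, this is roughly the kind of split I have in mind, assuming a small mirrored SLOG (dedicated ZIL device) on two of the SSDs and the remainder as L2ARC; partition sizes and device names are just placeholders:

    # Carve each SSD into a small SLOG partition and a larger L2ARC partition
    parted -s /dev/sdk mklabel gpt mkpart slog 1MiB 16GiB mkpart l2arc 16GiB 100%
    parted -s /dev/sdl mklabel gpt mkpart slog 1MiB 16GiB mkpart l2arc 16GiB 100%

    # Mirror the SLOG partitions across the two SSDs, stripe the L2ARC partitions
    zpool add tank log mirror /dev/sdk1 /dev/sdl1
    zpool add tank cache /dev/sdk2 /dev/sdl2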