I am currently looking around and trying to spec up a new Solaris/ZFS machine to act as a Fibre Channel storage server providing the backend storage for ~16 virtual hosts. As this covers multiple aspects (the main one being disk related) I felt this was the most appropriate place to put it; if that is not the case, feel free to move it.
The current setup, which will be moved to a lower-load application, consists of the following (it has been running 8 hosts):
Dual Xeon E5530 CPUs
40GB RAM
Dual LSI 9200-8e HBA cards in IT mode
Intel (LSI) 8-port RAID card running JBOD for the SSD/RAM disks
Dual Dell MD1000 15-drive enclosures/expanders
2-port ACard SATA RAM disk for ZIL, populated with 8x 2GB sticks
4x Intel 530 series 240GB SSDs as L2ARC
30x WD Red 2TB drives for bulk storage (configured as 15x mirrors)
This system has been fine for a while, however it is starting to show its limits when the virtual machines get IO heavy.
My current thinking is along the following lines, and I would appreciate any input on ways to better optimise the hardware side of things when I replace it in a few months' time:
Supermicro X10SRL-F motherboard
Intel Xeon E5-1620 v3 CPU
128GB RAM (8x 16GB ECC)
8x Samsung 850 Evo 1TB SSDs for L2ARC
42x 2TB drives (at a minimum); I would like to go with more, but ultimately it comes down to the case(s) that end up being used
1x ZIL device - unsure what to go with here
LSI HBA cards as required for internal/external connectivity, as determined by the case choices
For the ZIL, I was thinking of the Samsung SV843, as it appears to have reasonably low latency for a standard 2.5" SSD and is high endurance. Depending on case choice, though, a second-hand Dell (Marvell) 8GB WAM or a 365GB ioDrive2 are at a similar price point and offer lower latency - are there any similar-price-point options that would give better performance?
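For what it's worth, the reason SLOG latency matters so much is that each sync write commits before the next one in that stream can issue, so per-stream sync IOPS is roughly the inverse of device write latency. A quick back-of-envelope sketch (the latency figures below are assumed ballparks for illustration, not measured specs for these devices):

```python
# Per-stream sync-write throughput is bounded by SLOG write latency:
# each commit waits for the device before the next can issue.
# Latencies are assumed ballpark figures, NOT vendor specs.

candidates = {
    "SATA SSD (SV843-class)": 70e-6,       # ~70 us, assumed
    "PCIe flash (ioDrive2-class)": 20e-6,  # ~20 us, assumed
    "DRAM-backed (WAM-class)": 10e-6,      # ~10 us, assumed
}

for name, latency in candidates.items():
    # One outstanding sync write at a time -> IOPS ~= 1 / latency
    print(f"{name:28} ~{1 / latency:,.0f} sync writes/s per stream")
```

So even if a SATA SSD has plenty of total throughput, a lower-latency PCIe or DRAM-backed device can be worth several times the sync write rate for latency-sensitive VM workloads.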
For the case, I have not settled on which chassis/JBOD enclosures to use yet, but I like the Supermicro 45-drive one ( http://www.supermicro.com.tw/products/chassis/4U/847/SC847E1C-R1K28JBOD.cfm ) in combination with a 2U 24x 2.5" case to hold the rest of the server components and the ZIL/cache disks (this has the added benefit of allowing additional cache disks to be installed later on). However, someone has put me onto the Chenbro 48/60-drive and modular 36-drive 4U chassis, and I am enquiring about pricing for those to see how they stack up against the Supermicro offerings ( Chenbro - Products ). If you know of any other high-density options, I would be interested in checking them out.
Disk wise, I know pure SSD would be best for high IO, however with the storage capacity I require this is impractically expensive; at this stage I can't justify $20,000+ (AU) just on disks, which would leave little room for expansion. That leaves me relying on throwing a large number of spinning disks at it, with large amounts of cache to free them up for writes.
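One thing worth checking before committing to 8TB of L2ARC: every cached L2ARC record holds a header in ARC (main RAM), and for small-blocksize VM workloads that overhead can get very large. The bytes-per-record figure below is an assumption - it varies considerably by ZFS version (older Solaris/illumos figures are often quoted around 180 bytes, modern OpenZFS closer to 70), so check your platform:

```python
# Rough sketch: ARC RAM consumed by L2ARC headers for a given cache
# size. header_bytes is an ASSUMPTION (~180B older ZFS, ~70B modern
# OpenZFS) -- verify against your platform's source before relying on it.

def l2arc_header_ram_gb(l2arc_bytes, avg_record_bytes, header_bytes=180):
    """RAM (GiB) used in ARC to index an L2ARC of l2arc_bytes."""
    records = l2arc_bytes / avg_record_bytes
    return records * header_bytes / 2**30

L2ARC = 8 * 10**12  # 8x 1TB SSDs

# Small-record VM workload (8K zvol blocks) vs default 128K records:
for recsize in (8 * 1024, 128 * 1024):
    print(f"{recsize // 1024:3d}K records: "
          f"~{l2arc_header_ram_gb(L2ARC, recsize):.0f} GiB of ARC headers")
```

Under those assumptions, an 8K-record VM workload could need more header RAM than the whole 128GB you are planning, while at 128K records it is only ~10GiB. It may be worth sizing L2ARC to the working set rather than maxing it out.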
Should I continue with the WD Red drives? They have given me zero issues in the current system, and I would happily use them again unless there is something better to use.
Has anyone else on here done a similarly sized setup before?
How would you go about setting up the vdevs? Mirrors? 6-8 disk RAID-Z2s?
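To frame that question, here is the trade-off in rough numbers for 42x 2TB drives. Random IOPS for a RAID-Z vdev is roughly that of a single disk, while mirrors can service reads from both sides; the 75 IOPS/disk figure is an assumed ballpark for 5400rpm-class drives, not a spec:

```python
# Usable capacity vs crude random-IOPS estimate for candidate vdev
# layouts of 42x 2TB drives. DISK_IOPS is an ASSUMED ballpark for a
# 5400rpm-class drive; RAID-Z vdev ~ one disk of random IOPS.

DISKS, TB_PER_DISK, DISK_IOPS = 42, 2, 75

def layout(width, parity, name):
    vdevs = DISKS // width
    usable = vdevs * (width - parity) * TB_PER_DISK
    write_iops = vdevs * DISK_IOPS
    # mirrors can satisfy reads from either side of the pair
    read_iops = write_iops * (2 if width == 2 else 1)
    print(f"{name:14} {vdevs:2d} vdevs  {usable:3d}TB usable  "
          f"~{read_iops} read / ~{write_iops} write IOPS")
    return vdevs, usable, read_iops, write_iops

layout(2, 1, "21x mirrors")
layout(6, 2, "7x 6-disk Z2")
layout(7, 2, "6x 7-disk Z2")
```

Under those assumptions, mirrors give up roughly a third of the usable space of 6-disk Z2s in exchange for about triple the write IOPS, which is usually the right trade for VM backend storage.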
Would you change any other part of the setup, or would you leave it as is?