Sanity check on all-flash ZFS build: Solaris 11.2, SuperMicro chassis, Samsung 850 Pro 1TB SSDs...


Hadrien

New Member
Jun 18, 2014
Posting for a quick sanity check on a build we are thinking about doing to run the bulk of our production VMs.

We have had a similar build running for over a year, hosting all of our VDI VMs through NFS storage on Solaris 11.2 with napp-it Pro.

  • SUPERMICRO 2027R-AR24NV Chassis (Supermicro | Products | SuperServers | 2U | 2027R-AR24NV)
  • Quantity 8 - Kingston 16GB 1600MHz DDR3 ECC Reg (128GB RAM total, thinking about going to 256 or even 512)
  • Quantity 1 - Xeon E5-2609 v2 2.5GHz Server CPU
  • Quantity 20 - 1TB Samsung 850 Pro SSDs
  • Quantity 1 - 256GB Samsung 850 Pro for Solaris 11.2 OS
  • Intel X520-DA2 10Gb Ethernet
This would be hosting about 110 server VMs across 5 ESXi hosts, accessed via NFS, all interconnected with 10Gb Ethernet. Our current build is a 20-drive ZFS array of 10 mirrored pairs of 3TB nearline SAS disks, a 200GB L2ARC, and an 8GB ZeusRAM drive for the ZIL. Do you guys think we would still need a dedicated ZIL device for an all-SSD array like this?

The build we have is working well for VDI, but with the mixed workload we will be tossing at it from production servers, we are wondering whether a dedicated ZIL device would be necessary. I know that we cannot use a ZeusRAM since this chassis only takes 2.5" drives, and I have not seen anything similar to a ZeusRAM in a 2.5" form factor. NVMe in a PCIe slot maybe? Really just looking to see if anyone else has done a build like this and found that they needed a ZIL. Thanks in advance.
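
For reference, here is a rough sketch of the pool layout I am picturing, with the SLOG left optional. Pool and device names below are just placeholders, not our actual config:

    # 10 mirrored pairs of the 1TB 850 Pros (device names are placeholders)
    zpool create vmpool \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0
      # ...remaining mirror pairs omitted for brevity

    # optional dedicated SLOG, if it turns out we still want one
    # zpool add vmpool log c0t20d0

    # NFS writes from ESXi are synchronous, so the ZIL is in the write path either way;
    # sync=standard keeps that behaviour (sync=disabled would bypass it, not safe for production VMs)
    zfs create vmpool/vmstore
    zfs set sync=standard vmpool/vmstore
    zfs set share.nfs=on vmpool/vmstore    # Solaris 11 share syntax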
 

Patrick

Administrator
Staff member
Dec 21, 2010
What about a Samsung SM863 instead? 960GB v. 1TB but at least you gain PLP.
 

Chuckleb

Moderator
Mar 5, 2013
Minnesota
I asked before about a ZIL on an all-SSD pool, and someone suggested that it may still help coalesce metadata operations and small writes. I just threw in a 20GB partition for it.
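
Something along these lines, assuming the pool is called tank; the slice names are just examples:

    # attach a small SSD partition as a dedicated log device (names are examples)
    zpool add tank log c0t5d0s0

    # or mirror the SLOG across two slices so in-flight sync writes survive a device failure
    # zpool add tank log mirror c0t5d0s0 c0t6d0s0

    # the log device shows up under its own "logs" section
    zpool status tank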

What do you have for controllers? That's a lot of bandwidth.
 

abstractalgebra

Active Member
Dec 3, 2013
MA, USA
What about a Samsung SM863 instead? 960GB v. 1TB but at least you gain PLP.
The Samsung SM863 enterprise SSD is optimized for enterprise workloads, has Power Loss Protection (PLP), and carries a 5-year warranty for enterprise usage. It would be wise to consider it for your application.

Samsung has released a new SSD series aimed at both enterprise and smaller data centers: the SM863. The SM863 is a write-intensive drive built for high endurance, high performance, and low power consumption. It offers sustained IOPS performance, built-in thermal guard protection to prevent overheating, and tantalum capacitors for power loss protection. On the performance claims, the SM863 is said to reach speeds of up to 520MB/s on reads and 485MB/s on writes, with read performance of up to 97,000 IOPS. The drive comes in capacities of up to 1.92TB and carries a 5-year warranty.
Samsung SM863 SSD Review | StorageReview.com - Storage Reviews
 

Tobias

New Member
Jun 15, 2015
Sweden
I agree with abstractalgebra and Patrick; you should absolutely use SSDs with Power Loss Protection for your production VMs.
 

Hadrien

New Member
Jun 18, 2014
What about a Samsung SM863 instead? 960GB v. 1TB but at least you gain PLP.
I keep forgetting about the enterprise offerings from Samsung. We have had good luck with the 850 Pro, and the price difference is significant enough for us to consider using it over the SM863. However, I may put a note out to a few of our VARs to see what SM863 pricing looks like now. The extra peace of mind is certainly worth it for a production environment.

Both iSER and SRP offer much better performance with RDMA.
https://forums.servethehome.com/index.php?threads/ipoib-vs-srp.5048/
I will have to check that out. I have not been keeping up on my ZFS connectivity news apparently. :)

I asked before about a ZIL on an all-SSD pool, and someone suggested that it may still help coalesce metadata operations and small writes. I just threw in a 20GB partition for it.

What do you have for controllers? That's a lot of bandwidth.
The SuperMicro chassis comes with three AOC-S3008L-L8i controllers (Super Micro Computer, Inc. - Products | Accessories | Add-on Cards | AOC-S3008L-L8i), which are the same as the Avago 9300-8i 12Gb/s controller (SAS 9300-8i Host Bus Adapter).
 

whitey

Moderator
Jun 30, 2014
Network will be your bottleneck. Why nfs?
Why NOT NFS???
  • Ease of config, no additional HW needed (same with iSCSI, but that has all sorts of nuisances)
  • The single-session-per-connection limit is alleviated by using multiple stub/unrouted VLANs paired with unique subnets (hypervisor/storage array side config to scale), and NFS v4.1 (pNFS) is gonna BLOW folks away once session trunking 'arrives' for the masses
  • Thin provisioning by default; expand/decrease NFS volumes on the fly
  • Lower-level/finer-grained control of snapshots outside of VMware (supported array, of course)
  • No VMFS overhead/layer/another level of inception/encapsulation: I can access my NFS volumes and vmdk's natively with common OS tools and even perform self-service file-level restores through several methods
  • No FC switches, zoning, fabric mgmt, HBAs, or LUNs to manage (just the last one if we are debating iSCSI), no single disk I/O queue, and no LUN resignaturing for DR purposes as is required with iSCSI/FC
I'll stop there, but I could go on. Those are the benefits/highlights of deploying virtual infrastructure on NFS; a rough sketch of what the day-to-day looks like is below.
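
To make that concrete, a minimal sketch of the ZFS and ESXi side of it; the pool, dataset, and host names are hypothetical examples:

    # share a dataset to the ESXi hosts (Solaris 11 share syntax; names are examples)
    zfs create tank/nfs-vms
    zfs set share.nfs=on tank/nfs-vms

    # grow or shrink the "datastore" on the fly; it is just a quota
    zfs set quota=4T tank/nfs-vms
    zfs set quota=2T tank/nfs-vms

    # snapshot below the VMware layer and browse the vmdk's with normal OS tools
    zfs snapshot tank/nfs-vms@before-patch
    ls /tank/nfs-vms/.zfs/snapshot/before-patch/

    # mount it on an ESXi host: no VMFS, no LUNs, no zoning
    esxcli storage nfs add --host=10.0.0.50 --share=/tank/nfs-vms --volume-name=nfs-vms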
 