Good High-Performing Enterprise NAS w/ 10GbE & ability to have 10TB+


smccloud

Member
Jun 4, 2013
I just got tasked with finding a good commercial option that supports 10GbE and 10TB+ of storage and is high performing. I am thinking of a ZFS-based solution using SSDs, but I need some help with ideas on which ones to look at. I know iXsystems & Nexenta have commercial offerings w/ support, but are there any others that don't require me to request a quote, i.e. ones I can price online?
 

smccloud

Member
Jun 4, 2013
Was hoping for something slightly less expensive. No one in our implementation group knows how to configure a NetApp either.....
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
How would you describe your requirements in terms of the balance between price, capacity, small IO performance, large IO performance, reliability, and ease of use? Even better, talk about how much disk space you need and what you'll be using it for.
 

smccloud

Member
Jun 4, 2013
I was told not to worry about price right now.
Needs to have at least 4TB of capacity, but be expandable.
Small & Large IO performance need to be as good as possible.
Needs to be very reliable & highly available.
Needs to be as easy to use as possible for our support department.
Will be used to store map data & the image tiles generated for that data (by ArcGIS for Server). The servers connecting to it (8 per datacenter) will be serving a Silverlight web app to an E9-1-1 call center for plotting 9-1-1 calls.

Currently, I am leaning towards two Synology RS10613xs+ with 32GB RAM, a dual port 10GbE adapter, 10 2TB Western Digital RE HDDs in RAID 6 (since Synology cannot expand RAID10 volumes) & 2 Intel 520 240GB SSDs per datacenter.
Each server would have a dual port 10GbE adapter & there would be two 10GbE switches for redundancy purposes.
Still don't know which 10GbE connector we would need, but that can be determined in the future unless someone has a strong reason to choose one over the other.
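
Quick back-of-envelope on the capacity of that layout (my rough math, assuming each 2TB drive formats to roughly 1.82 TiB):

Code:
# Rough usable-capacity check for the proposed Synology array (assumed figures).
drives = 10
tib_per_drive = 2e12 / 2**40                   # a "2TB" drive is ~1.82 TiB formatted

raid6_usable = (drives - 2) * tib_per_drive    # RAID 6 gives up two drives to parity
raid10_usable = (drives // 2) * tib_per_drive  # RAID 10 for comparison (mirrored pairs)

print(f"RAID 6 usable:  ~{raid6_usable:.1f} TiB")   # ~14.6 TiB
print(f"RAID 10 usable: ~{raid10_usable:.1f} TiB")  # ~9.1 TiB

Either way that is well above the 4TB minimum, so RAID 6 vs RAID 10 is really about rebuild behavior and small-IO performance rather than raw space.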
 

Mike

Member
May 29, 2012
EU
Expanding a live system may require downtime or serious overhead. Remember that it will take time (resources) to expand, unless you're doing pools of data where you hot-plug a set and expand the logical volume, as with LVM or ZFS.

Also, performant, highly available, distributed, and reliable will require more than two Synolo-whatevers.
 

smccloud

Member
Jun 4, 2013
We are open to ZFS solutions. However, they will need to have full support (i.e. not be built by me).

Condre Storage offers a Nexenta solution; a bonus is that they are in the same state we are, which would help w/ delivery for in-house setup before it ships to the customer site. However, I have never had any dealings with them.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I'm getting it. So eight servers will be using the storage. How many clients (e.g. web sessions) will those eight servers handle at once? I'm trying to get an idea of roughly how much IO is expected.
 

smccloud

Member
Jun 4, 2013
Based on my memory (and it is foggy), I believe it will be up to 1000 users if one data center goes down.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
You should take all free advice, especially that rendered with only a few sentences of input, as food for thought and nothing more. That said:

With 1000 911 clients, I would consider uptime to be essential. For that reason, I'd want a cluster - two NAS devices with replicated or at least duplicated content, each of which could handle the load should one go down. It also seems like you have more than one datacenter. If so then I'd consider having one cluster per datacenter, optionally with replication across data centers if required - map tiles are pretty static, so simply duplicating the data might be good enough.

Luckily, these would be pretty modest clusters. Your disk space requirements are very small - single digit TB - and you don't appear to need some of the really fancy features offered by the higher end NAS vendors. It seems like you need a smallish 10GbE NAS, with replication, able to service just eight or so clients each, but with each client potentially sending a very large number of small requests at a time. The usage seems like it'll be read mostly, which gives me an idea.

I like your idea of 10GbE, though you may not need the network redundancy if you go with a cluster. I like your idea of an SSD cache, but I would consider taking it even further. If your app is indeed read mostly, then pushing as much data as possible onto SSD will greatly improve response time, and actually reduce the load on the NAS devices. In other words, have you thought about going all SSD?
 

smccloud

Member
Jun 4, 2013
I wish the map tiles were static; unfortunately, most of our customers update their maps at least once a month, if not daily. Currently we anticipate recaching the map on a weekly basis. For that reason, I am leery of suggesting an all-SSD solution.

My current thought was to treat each data center as a stand-alone solution: redundant NASes (using whatever clustering method is built in), dual 10GbE switches, with each NAS/server having one connection to each switch & one connection between the switches. I am not sure of the best share method to use, but I anticipate either CIFS or NFS, since IIRC the iSCSI initiator in Server 2008 R2 is not the most performant.

Has anyone ever had any experience with Condre Storage or iXsystems?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I would consider data that changes daily to monthly to be pretty static. Even if you wiped all the data and replaced it every day, that still wouldn't cause me to stay away from SSDs. Imagine deploying ten 800GB Intel S3500 drives configured as RAID 10 for ~4TB of space. You'd get some 2.5 petabytes of endurance by Intel's very conservative standards - that's 4TB written per day for far longer than you'll need - for around $8K. Double it to $16K and you get 10x that endurance.
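
Back-of-envelope on that endurance figure (assuming I'm remembering the spec right at roughly 450 TB written per 800GB S3500, and that RAID 10 doubles every host write):

Code:
# Illustrative endurance math for 10 x 800GB Intel S3500 in RAID 10 (assumed spec values).
drives = 10
tbw_per_drive = 450                           # ~450 TB of rated writes per 800GB drive

raw_endurance_tb = drives * tbw_per_drive     # 4,500 TB across all drives
host_endurance_tb = raw_endurance_tb / 2      # RAID 10 mirrors writes -> ~2,250 TB host writes

weekly_recache_tb = 4                         # worst case: rewrite the whole ~4TB volume weekly
years = host_endurance_tb / weekly_recache_tb / 52
print(f"Host-level endurance: ~{host_endurance_tb / 1000:.2f} PB")   # ~2.25 PB
print(f"At one full recache per week: ~{years:.0f} years of writes") # roughly a decade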

That said, if the app uses only a small portion of the available maps per day then a simple SSD cache might provide almost as much benefit.

The thing I like about SSD NAS is this: With every request taking so very little time to complete, loads go down dramatically. With so little pressure on the NAS, reliability goes up and even less than stellar hardware hums along just fine.
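
To put some purely illustrative numbers on that - the load on the box is basically request rate times service time (these figures are assumptions, not measurements from your app):

Code:
# Purely illustrative: how service time drives load on the NAS (load = rate x service time).
requests_per_sec = 2000       # assumed small tile reads per second across all eight servers
hdd_read_s = 0.008            # ~8 ms for a random read from a busy HDD array
ssd_read_s = 0.0002           # ~0.2 ms for the same read from SSD

hdd_load = requests_per_sec * hdd_read_s   # 16.0 -> you'd need ~16 busy spindles to keep up
ssd_load = requests_per_sec * ssd_read_s   # 0.4 -> a fraction of one SSD's time

print(f"HDD offered load: {hdd_load:.1f} device-seconds per second")
print(f"SSD offered load: {ssd_load:.1f} device-seconds per second")

Same request stream, a tiny fraction of the pressure on the hardware.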

Anyway, I like the Synology units for their simplicity. The one you are looking at, assuming you load it up with SSDs or at least an SSD cache, would probably work, but I would consider it a budget choice, not one made because it's the absolute best system. If cost were not an issue, I'd stick with a vendor with more of an enterprise focus. I'm a ZFS fan, so a pre-built and vendor-supported ZFS system with lots of cache or all SSD would be very appealing. If you had serious performance needs, I'd have you look at the Sun ZFS boxes, but that's serious overkill.
 

smccloud

Member
Jun 4, 2013
Very true.

Since it is part of this solution: I am under the impression that 10GbE SFP+ is better than 10GBase-T. Is this right?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
It's not a very strategic choice, and there isn't a big difference between the two. For 10Gb, SFP+ is much more common in the datacenter than Base-T right now, but Base-T's share is growing. SFP+ does have lower latency, but not by so much that you'd notice the difference in your application. Base-T used to consume much more power, but it has gotten much better, and there isn't a big delta when you are looking at the current generation of equipment.
 

smccloud

Member
Jun 4, 2013
Ok, then I won't worry about it.
 

mrkrad

Well-Known Member
Oct 13, 2012
8x the latency per hop for Base-T, plus a ton more energy per hop. Not everyone is doing just 2 hops (back and forth). The extra latency was challenging for many doing FCoE and Data Center Bridging, since the latency was beyond the ability to guesstimate (ETS) for Class of Service (VLAN flow control).

Which is why Intel is about the only one I can think of that has FCoE with Base-T; everyone else said fuggedaboutit.

Hop, hop, hop - it adds up!
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I can see how +2 microseconds of latency, times a few hops, might be significant when the payloads are tiny and coming at you rapidly as in MPI, but in an NFS NAS? I remember a marketing document where NetApp was bragging about 3,500 microseconds of total latency in their new $2 million NAS offering. Of course they called it 3.5 milliseconds, but the numbers are the same.
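
Rough arithmetic, using the numbers above (assumed hop count, treating the Base-T penalty as ~2 microseconds per hop):

Code:
# How much the extra Base-T latency matters against total NAS request latency (assumed figures).
extra_per_hop_us = 2.0        # ~2 us extra per Base-T hop vs SFP+
hops = 4                      # client -> switch -> NAS and back, roughly
nas_latency_us = 3500.0       # the ~3.5 ms filer figure above

extra_us = extra_per_hop_us * hops
print(f"Extra PHY latency: {extra_us:.0f} us")
print(f"Share of a request: {extra_us / nas_latency_us:.2%}")   # about 0.23%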
 

smccloud

Member
Jun 4, 2013
Well, iXsystems uses SFP+ in their solutions. BTW, an all-SSD solution from them is really expensive. Although I think a standard ZFS setup with a ZIL, L2ARC & a buttload of RAM will be good enough.
 

mrkrad

Well-Known Member
Oct 13, 2012
Latency does limit your peak bandwidth unless you are doing pure large block sequential work.

I suppose if you look at it from the perspective of the NFS server, with 100 clients, the latency could be additive as far as buffer fill goes (which adds more latency). Once buffers congest, packets drop, and you get a big mess with delayed-ACK/windowed protocols.
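
Rough illustration of the windowed-protocol point (assumed window size and RTTs; with a fixed amount of data in flight, throughput tops out at window / RTT):

Code:
# Why latency caps throughput for a windowed protocol like TCP: throughput <= window / RTT.
# Illustrative numbers only.
window_bytes = 64 * 1024              # a modest 64 KiB in flight
for rtt_us in (50, 100, 400):         # round-trip time in microseconds
    gbps = window_bytes * 8 / (rtt_us * 1e-6) / 1e9
    print(f"RTT {rtt_us:>3} us -> max ~{gbps:.1f} Gb/s")

Bigger windows help, but once buffers fill and drops start, the effective window shrinks and it gets worse.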

Plus the power draw of Base-T is ridiculous.