Throughput of D-1500 series in SAN or NAS


Kal G

Active Member
Oct 29, 2014
Intel's Xeon D looks like a great option for a small SAN or NAS device. However, specifications aren't indicative of real world performance.

Is anyone using the D-1540/20 for a SAN or NAS? If so, what kind of transfer rates are you seeing? For comparison purposes, please include CPU type, read/write throughput, protocol, link speed, OS, hypervisor (if applicable), drives, and RAID format (if used).
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
We have a D-1540 (X10SDV-TLN4F?) and when we were testing with RAID 10 SSDs (Samsung 960GB, forget the model) we had no problem filling a 10Gbase-T pipe at over 800MB/s just in casual testing. We didn't even bother with further network tuning.

Here's why I'm confused by the ask here. Many of the SAN/NAS boxes from big vendors use E5-2609s or similar. Here are the specs for the V3 part (6C, 1.9GHz, no HT/Turbo). The V2 version was 4C 2.5GHz. Xeon D is easily in this frequency and core count range, just with Broadwell (newer) versus Haswell or Ivy Bridge. So you've got plenty of CPU power, and you've got onboard 10Gb using an Intel controller/driver. I think the only reason this is not the best NAS platform is that the SoC only provides 6 SATA ports.
 
  • Like
Reactions: Kal G

PigLover

Moderator
Jan 26, 2011
I think you are right on the D-1540 being powerful enough for NAS/SAN, even with 10GbE + NxSSD.

However, when comparing the D-15xx to the E5, don't forget that there are still significant advantages in the E5 part, even at the bottom end of the performance scale. You get 4-way memory interleave on the E5, more cache per core, and a more effective cache architecture.

If it's pure NAS/SAN or even a simple router (pfSense), then these differences will matter little. But they could be a difference maker if you are doing anything that requires you to act on the data (e.g., an IDS/IPS like Suricata that needs to crack open most of the packets and look at them).
 
  • Like
Reactions: Kal G

Patrick

Administrator
Staff member
Dec 21, 2010
I think @MiniKnight and @PigLover are right. The Broadwell-DE is a newer core than what you can get today in the E5 line, but you only get 1.5MB of L3 cache per core.

Realistically though, these parts have a lot more power than most NAS/ SAN stuff will need even if you are doing encryption/ compression.
 
  • Like
Reactions: Kal G

Kal G

Active Member
Oct 29, 2014
Thanks, that helps a lot. The reason I asked is that I was seeing really poor throughput (< 90 Mbps) on one of our file servers built off an X10SDV-TLN4F. I was looking for some baselines for comparison in case I'd overestimated the performance of these processors.
 

mstone

Active Member
Mar 11, 2015
Thanks, that helps a lot. The reason I asked is that I was seeing really poor throughput (< 90 Mbps) on one of our file servers built off an X10SDV-TLN4F. I was looking for some baselines for comparison in case I'd overestimated the performance of these processors.
You can get better performance than that off a celeron, so something's wrong.
 
  • Like
Reactions: Patrick

Patrick

Administrator
Staff member
Dec 21, 2010
Thanks, that helps a lot. The reason I asked is that I was seeing really poor throughput (< 90 Mbps) on one of our file servers built off an X10SDV-TLN4F. I was looking for some baselines for comparison in case I'd overestimated the performance of these processors.
Are you sure it is over the 10Gb NICs not the 1Gb ones?
 

Kal G

Active Member
Oct 29, 2014
Are you sure it is over the 10Gb NICs not the 1Gb ones?
It is the 10Gb NIC, but it is running to a 1 Gb switch port. However, based on internal tests and VM-to-VM transfers (this is an ESXi 6.0 box), I was expecting speeds on gigabit Ethernet somewhere in the 200-400 Mbps range. Before somebody asks, the capitalization on those rates is correct. This isn't a MBps vs Mbps issue.

I've tried using one of the 1 GbE ports with the same results. Different transfer protocols (CIFS, SCP, FTP) have made no difference. I still need to rule out a few more things such as drivers and the network equipment. I'll let you know what I find out in case it helps somebody else in the future.
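One way to split the problem is to take the disks and file-sharing protocols out of the picture entirely and measure raw TCP throughput between the two endpoints (iperf3 is the usual tool for this; the sketch below is just a dependency-free stand-in, shown here over loopback, with all names my own):

```python
import socket
import threading
import time

def measure_tcp_throughput(total_mib=32, host="127.0.0.1"):
    """Send total_mib MiB of zeros over a TCP socket and return Mbit/s.

    Run between two real hosts (server on one, client on the other)
    to test an actual link rather than loopback.
    """
    server = socket.socket()
    server.bind((host, 0))              # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    def drain():
        conn, _ = server.accept()
        while conn.recv(1 << 16):       # read until the sender closes
            pass
        conn.close()

    receiver = threading.Thread(target=drain)
    receiver.start()

    chunk = b"\x00" * (1 << 20)         # 1 MiB per send
    client = socket.create_connection((host, port))
    start = time.perf_counter()
    for _ in range(total_mib):
        client.sendall(chunk)
    client.close()
    receiver.join()                     # stop the clock only once fully received
    elapsed = time.perf_counter() - start
    server.close()
    return total_mib * (1 << 20) * 8 / elapsed / 1e6   # Mbit/s

print(f"{measure_tcp_throughput():.0f} Mbit/s")
```

If this alone tops out near 90 Mbps across subnets, the bottleneck is in the network path, not the NAS.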
 

Kal G

Active Member
Oct 29, 2014
Wrapping up this issue: it turns out the problem was not the NAS but the embedded IPS on the firewall. We're using a Cisco 5506-X in place of a router, and the FirePower IPS software it runs monitors traffic between the data and server subnets. It apparently struggles to keep up with traffic above 100 Mbps. Shutting down the IPS module brings speeds back to expected values. For reference, the FirePower version is 6.0.0-1005, and it is only rated to handle a maximum of 125 Mbps.

Serves me right for assuming the newest piece of equipment, the NAS, was the problem. I should know better by now. Thank you all for your assistance and feedback.

To answer my own initial question in case the metrics help anybody else:

CPU: D-1540
OS: Ubuntu Server 14.04 LTS
Hypervisor: ESXi 6.0U1
Drives: 8x Western Digital Red 4 TB
RAID: RAIDZ2
Link Speed: 1 Gbps
MTU: 1500

Protocol: CIFS
Avg. Throughput: 544 Mbps

Protocol: FTP
Avg. Throughput: 639 Mbps

* Note: the laptop used for testing has an SSD, to ensure the limiting factor was on the server side.
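For context on those averages, converting to byte rates and comparing against the 1 Gbit/s link shows how much headroom remains (wire-rate arithmetic only, ignoring protocol overhead):

```python
LINK_MBIT = 1000  # the 1 Gbit/s link in use

def utilization(mbit_per_s, link_mbit=LINK_MBIT):
    """Return (MByte/s, fraction of link capacity used)."""
    return mbit_per_s / 8, mbit_per_s / link_mbit

for proto, rate in [("CIFS", 544), ("FTP", 639)]:
    mbyte, frac = utilization(rate)
    print(f"{proto}: {rate} Mbit/s = {mbyte:.0f} MByte/s ({frac:.0%} of link)")
# → CIFS: 544 Mbit/s = 68 MByte/s (54% of link)
# → FTP: 639 Mbit/s = 80 MByte/s (64% of link)
```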
 

mstone

Active Member
Mar 11, 2015
Before somebody asks, the capitalization on those rates is correct. This isn't a MBps vs Mbps issue.
FWIW, I find it's easiest to forestall such questions by using "Mbit/s" and "Mbyte/s" rather than hoping that everyone involved is using the same capitalization scheme.
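In that spirit, a tiny helper that always prints both forms leaves no room for misreading (a sketch; the function name is my own):

```python
def rate_str(bits_per_s):
    """Format a data rate unambiguously, in both bit and byte units."""
    mbit = bits_per_s / 1e6
    mbyte = bits_per_s / 8e6   # 8 bits per byte
    return f"{mbit:g} Mbit/s ({mbyte:g} Mbyte/s)"

print(rate_str(544e6))   # → 544 Mbit/s (68 Mbyte/s)
print(rate_str(1e9))     # → 1000 Mbit/s (125 Mbyte/s)
```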
 

mstone

Active Member
Mar 11, 2015
Wrapping up this issue, it turns out the problem was not the NAS, but the embedded IPS on the firewall.
I'd suggest that next time you do a performance test, try doing so from the same network, or at least mention that you're running through a firewall...
 

Kal G

Active Member
Oct 29, 2014
FWIW, I find it's easiest to forestall such questions by using "Mbit/s" and "Mbyte/s" rather than hoping that everyone involved is using the same capitalization scheme.
Great point.

I'd suggest that next time you do a performance test to try doing so from the same network, or at least mention that you're running through a firewall...
Agreed. I'd taken the network engineer's assurances that throughput across subnets was in the gigabit range at face value, but didn't verify on my own.
 

Patrick

Administrator
Staff member
Dec 21, 2010
That still seems a bit low, but I think there might be more on the networking side that can be optimized (e.g., I am guessing you are not using jumbo frames).

I had a feeling it was network related!
 

Kal G

Active Member
Oct 29, 2014
That still seems a bit low but I think there might be more on the networking side that can be optimized (e.g. I am guessing you are not using jumbo frames).

I had a feeling it was network related!
Yes, there hasn't been any optimization. MTU is at its default of 1500.
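For what it's worth, per-frame header overhead puts an upper bound on what jumbo frames can buy. A rough calculation (Ethernet + IPv4 + TCP headers, no options, ignoring preamble and inter-frame gap):

```python
ETH_OVERHEAD = 18  # Ethernet header (14) + FCS (4)
IP_HDR = 20        # IPv4, no options
TCP_HDR = 20       # TCP, no options

def tcp_payload_fraction(mtu):
    """Fraction of each frame on the wire that is TCP payload."""
    return (mtu - IP_HDR - TCP_HDR) / (mtu + ETH_OVERHEAD)

print(f"MTU 1500: {tcp_payload_fraction(1500):.1%}")  # → 96.2%
print(f"MTU 9000: {tcp_payload_fraction(9000):.1%}")  # → 99.4%
```

So jumbo frames recover a few percent of wire efficiency (plus fewer packets for the CPU to process); worthwhile, but not the kind of change that explains a large throughput gap on its own.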