10 Gbit NAS for home network limited by SATA data limit

no com

New Member
May 2, 2019
6
0
1
My home network is a 1 Gbit setup with a QNAP NAS and a matching switch. This works fairly well but is rather slow when I process large amounts of data. I bought a dedicated SSD for my workstation, and that works, but what I actually want is a fast NAS that delivers data over a fast network. My workstation already has a 10 Gbit network card, and I want to upgrade the NAS and the switch. Theoretically I could speed up my network access 10-fold.

The problem is in the NAS. I currently have a QNAP TS-253 Pro and I can upgrade to, for example, a TVS-951X, containing 5 bays for 3.5" units and 4 2.5" bays for M.2 units. The trouble is that all bays have a SATA III connection, which limits the throughput to about 6 Gbit/s. In reviews of this thing, that is neatly what comes out. This is a waste for the M.2 disks, as they could easily reach the theoretical maximum speed of 10 Gbit/s.

Is there any way to overcome this speed limit, or are there systems that connect directly to the M.2 interface, bypassing SATA? I am looking for systems with a Linux-type OS.
 

gea

Well-Known Member
Dec 31, 2010
2,655
909
113
DE
You can consider two points:

1. Sequential performance
A single Sata 6G disk can give you around 500 MB/s. In a Raid-0 you stripe the disks, which gives you an overall sequential performance (theoretical maximum) of n x a single disk, so it is no problem to achieve 10G sequential performance with 2-3 disks in a Raid-0.

As all data in a Raid-0 is lost on a disk failure, you normally use other Raid levels with redundancy (Raid-10/5/6/Z1-3). If you use NVMe, the main advantage is that even a single NVMe can give 10G performance.
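As a back-of-the-envelope sketch of that striping math (the ~500 MB/s per-disk figure is an assumption for a typical SATA SSD, not a measurement):

```python
# Rough RAID-0 sequential throughput estimate (theoretical maximum).
# Assumes ~500 MB/s sequential per SATA 6G SSD; real results depend on
# the controller, CPU and filesystem overhead.

SATA_SSD_MBPS = 500          # assumed per-disk sequential rate, MB/s
TEN_GBE_MBPS = 10_000 / 8    # 10 GbE line rate = 1250 MB/s

def raid0_sequential(n_disks, per_disk=SATA_SSD_MBPS):
    """Theoretical best-case sequential throughput of an n-disk stripe."""
    return n_disks * per_disk

for n in range(1, 5):
    mbps = raid0_sequential(n)
    tag = "saturates" if mbps >= TEN_GBE_MBPS else "below"
    print(f"{n} disk(s): {mbps:.0f} MB/s ({tag} 10 GbE)")
```

By this estimate two disks stay just under the 1250 MB/s line rate and three go past it, matching the 2-3 disk rule of thumb above.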

2. Random performance (iops)
This is not so dependent on the interface (Sata, SAS, NVMe) but more on the type of disk or Flash. A single mechanical disk has around 100 read/write iops. Flash (in SSD or NVMe) can give high iops (50-300k read iops, 10-300k write iops). On writes these values mainly apply to new and empty Flash; on steady writes it can go down massively on desktop Flash disks. Good enterprise Flash can largely hold the write rate under steady write load.

The "best of all NVMe", Intel Optane, does not suffer from this degradation. It can hold up to 500k write iops even under steady load.

An additional performance-relevant item is cache. For slow disks this can be an SSD to improve iops. On better NAS systems, mainly ZFS, the cache (read cache and write cache) is RAM based, which gives a much better result. For ZFS, 4-8 GB RAM is suggested as a minimum. On good ZFS systems it is common to deliver over 80% of all reads from the RAM cache.
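That RAM-cache point can be illustrated with a toy blending model (the 6000 MB/s RAM and 200 MB/s disk rates are assumed round numbers, not figures from this thread):

```python
# Toy model of a RAM read cache: average read throughput as a function
# of the cache hit rate. RAM and disk rates below are illustrative
# assumptions, not measurements.

def avg_read_mbps(hit_rate, ram_mbps=6000, disk_mbps=200):
    """Blend RAM and disk reads by time-per-MB, weighted by hit rate."""
    time_per_mb = hit_rate / ram_mbps + (1 - hit_rate) / disk_mbps
    return 1 / time_per_mb

for hr in (0.0, 0.5, 0.8, 0.95):
    print(f"hit rate {hr:.0%}: ~{avg_read_mbps(hr):.0f} MB/s average")
```

Even at an 80% hit rate the average is still dominated by the misses, which is why a high RAM-cache hit rate matters so much.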

If you are looking for a newer system with a general-use OS (Linux/Unix), I would suggest an entry-class Intel x86 server (build your own with SuperMicro X11 mainboards, see UP Xeon Motherboards | Motherboards | Products - Super Micro Computer, Inc. - optionally with 10G onboard, or entry-level socket 1151 servers from Dell, HP, Lenovo or Supermicro) with a server-class mainboard that gives you ECC RAM and optionally IPMI (remote control via web browser). CPU-wise you can use a cheap Celeron, i3 or G44xx for a SoHo system; RAM is more important for performance.

Best of all regarding data security, but also with very good performance, are ZFS based systems, available on storage-centric Solarish (where ZFS comes from), FreeBSD (where it has been for a long time) and Linux (quite new there). If you are looking for a "Qnap-alike" web-managed NAS appliance, look at FreeBSD based ones like FreeNAS or XigmaNAS, or Solaris based ones (Oracle Solaris with native ZFS, or OpenIndiana or OmniOS with Open-ZFS - the same ZFS as in FreeBSD and Linux). Solaris, OmniOS and OpenIndiana are enterprise-class regular Unix operating systems without a web management. For them you can add my napp-it (the free edition is sufficient for a SoHo NAS).
 
  • Like
Reactions: William

BoredSysadmin

Active Member
Mar 2, 2019
569
189
43
One more important point to remember: Raid protection isn't free. It has a cost in both disk space [some of it is used for parity data and/or mirrored data] and in write performance. Raid 10 should be considered if you want both good write speed and protection from disk failure; the problem is that with Raid 10 you lose half of your disks to mirrored data.
Raid 5 provides approx. 0.9x the write performance of a single disk across the entire array, and Raid 6 approx. 0.7x. As gea mentioned, Raid 0 is the fastest but carries the highest risk of data loss - a single disk failure = loss of the entire volume's data.
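Using the ballpark factors from this post (treating the 0.9x / 0.7x write factors as illustrative, not measured), the capacity/write trade-off can be sketched as:

```python
# Rough RAID trade-off table. The 0.9 / 0.7 write factors are the
# ballpark figures quoted above, not benchmark results; real numbers
# depend heavily on the controller and software.

def raid_tradeoff(level, n_disks, disk_tb):
    """Return (usable TB, relative write speed vs a single disk)."""
    if level == "raid0":
        return n_disks * disk_tb, float(n_disks)     # full stripe
    if level == "raid10":
        return n_disks * disk_tb / 2, n_disks / 2    # half lost to mirrors
    if level == "raid5":
        return (n_disks - 1) * disk_tb, 0.9          # one disk of parity
    if level == "raid6":
        return (n_disks - 2) * disk_tb, 0.7          # two disks of parity
    raise ValueError(f"unknown level: {level}")

for lvl in ("raid0", "raid10", "raid5", "raid6"):
    cap, w = raid_tradeoff(lvl, n_disks=4, disk_tb=4)
    print(f"{lvl:6s}: {cap:4.1f} TB usable, ~{w}x write")
```

For example, a 4 x 4 TB Raid 10 comes out at 8 TB usable with ~2x single-disk write speed.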

Buying an SMB NAS which can deliver a consistent 10gig is not going to be cheap.
TVS-882ST3 - Features - QNAP (US)
 

no com

New Member
May 2, 2019
6
0
1
Thanks for your help @gea and @BoredSysadmin! I understand now that it is possible to have double the data speed of SATA with RAID 0 by striping and I assume that a RAID 10 has essentially the same speed. That explains why the data speeds I see can be 1.2 GB/s for the QNAP TVS-882ST3 - that is, in the QNAP material. The reviews on servethehome.com show something different (I took the first three examples I could find on this site):
All these reviews top out at about 600 MB/s for RAID 0, 5 and 6 - just the SATA III top speed, which is about half the (theoretical) top speed of 10 GbE. I am just curious: if it is true that RAID 0 can almost double the speed of data transfer (and the QNAP material seems to support this), I understand the reasoning behind it. But how come this does not show up in the benchmarks? Could somebody point me to some benchmark results?
 

BoredSysadmin

Active Member
Mar 2, 2019
569
189
43
If you look, all 3 of these review units are powered by a low-end or embedded low-power CPU. None of these is really a high-performance NAS, and raid-0 performance is likely limited by the CPU and its SATA controller. Consider: if the SATA controller only has a single 5gig link to the CPU, it will never be able to saturate the 10gig network.
 

ttabbal

Active Member
Mar 10, 2016
767
209
43
44
10GbE is still not "mainstream", so consumer-targeted devices don't really try to match that level of performance. They use cheap, slow CPUs and controllers, which is fine for 99% of the people looking at them. If you want 10Gb performance, you need to step up to better hardware. It doesn't need to be high-end enterprise gear, but it does need to be decent. You also need a wide stripe for mechanical disks to maintain that speed, particularly if there is any other activity; a seek will really hurt performance.

Just to give a data point, this is my setup.

2x Xeon 5675
96GB RAM
3x H310 HBAs
20 HDDs in mirror pairs on ZFS (RAID10 equivalent)
Mellanox ConnectX-2 cards with OM3 fiber
S2500 switch

I have verified that it can saturate 10GbE on sequential reads. I haven't tested everything else yet. I don't expect it to do so on writes; even local write benchmarks top out at about 900 MB/s. Locally, I get 2x that on sequential reads. As for random: it's rust, it's slow at random. ZFS caches help here, but there's only so much you can do. If you want fast random performance, you need SSDs.

Consumer NAS makers don't bother trying as it gets expensive fast and they won't sell that many. Heck, most consumers don't even have wired networks, let alone 10Gb. It doesn't take much to saturate wifi. :)
 

gea

Well-Known Member
Dec 31, 2010
2,655
909
113
DE
I understand now that it is possible to have double the data speed of SATA with RAID 0 by striping and I assume that a RAID 10 has essentially the same speed.
A Raid-10 is a Raid-0 of two mirrors. This is why a Raid-10 doubles write performance compared to a single disk. As a good Raid-10 can read simultaneously from both parts of a mirror, read performance can be 4x a single disk.
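That rule of thumb as a minimal sketch (ideal striping and mirror reads assumed, with no controller overhead):

```python
# Theoretical scaling of a Raid-10 built from m mirror pairs, per the
# reasoning above: writes scale with the number of mirror pairs in the
# stripe; reads can be served from both disks of each mirror.

def raid10_scaling(mirror_pairs):
    """Return (write multiple, read multiple) vs a single disk."""
    write_x = mirror_pairs        # one effective stripe per mirror pair
    read_x = mirror_pairs * 2     # both sides of each mirror can read
    return write_x, read_x

w, r = raid10_scaling(2)          # the classic 4-disk Raid-10
print(f"4-disk Raid-10: ~{w}x write, ~{r}x read vs a single disk")
```

A classic 4-disk Raid-10 (two mirror pairs) then comes out at ~2x write and ~4x read, matching the figures above.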
 

no com

New Member
May 2, 2019
6
0
1
Thanks for your answers. I agree with @ttabbal and @BoredSysadmin that hardware matters and the systems I listed are not top-notch, but I would like to point out two observations:
  1. I suggest you take a look at all servethehome reviews of QNAP, Synology, Asustor and Netgear NAS servers with a 10 GbE connection. You'll see that in all cases the performance tops out at about 600 MB/s for raid 0, 5 and 6. Assuming that higher speeds are possible and likely, as @gea suggests, this shared peak is strange, especially because it coincides with the SATA III limit.
  2. QNAP lists for the 10 GbE throughput of the TS-473 with expansion card 1173 MB/s for read and 882 MB/s for write; for the TS-951 the results are about 200 MB/s lower. Of course this is a vendor who wants to show its products in a favorable light, but a deviation of a factor of 2?
Oh well, as a poor home user I can only drool over @ttabbal's system :). I think I'll just try a QNAP and build a RAID 10. Even if everything else fails, a mere 5-fold speedup will still help my projects. Where else do you see such speedups in the computer world nowadays? Thanks a lot guys for your help.
 

ttabbal

Active Member
Mar 10, 2016
767
209
43
44
Nice of you to say, but as a fellow poor home user, I think you're wrong. :) My gear is all 10-year-old stuff bought cheap over time. Nothing super expensive. Sure, when it was new it was stupidly expensive, but if you watch eBay and such, you can get good deals on all of it. One thing that does work in my favor is that I have a spot where I can place machines without having to care about noise, so I can use old rackmount gear without having to change fans and such. Even then I had a few people calling me out on old gear using way more power etc. Never mind that the price difference for the next step up was enough that I can power this stuff for a decade. :) My pile of HDDs is mostly 2TB as well. :)

I suspect that at least some of those NAS devices are using SATA port multipliers and running all the drives over a shared SATA connection or, as someone mentioned, a single 5Gbps link to the CPU. It's a pretty common thing with these embedded CPUs. I took a quick scroll through the review of the TS-473. They tested with HDDs, much like a real user would. So it's possible they just maxed out those drives. You could try a set of SSDs to see if you can get past that and find where the bottleneck is. The manufacturer likely managed to hit that high a number one time with a specialized setup. A bit like HDD/SSD manufacturers quoting the speed to cache - sure, you can get that, for a few milliseconds. :)

All that said, those little things do a decent job for the price. Even though you aren't going to saturate a 10Gb connection, you still get 5x the speed of the 1Gb link. Not shabby as an upgrade path, particularly if you can use cheap off-lease cards. It's not hard to get older Mellanox cards for $20 these days. A pair of cards, transceivers and a fiber patch cable can be under $100 easy, half that with a deal or two in the mix. And for bulk storage this sort of thing works alright. Even the nicest setup with rust backing it isn't going to touch a modern NVMe drive though. I use a smaller NVMe in my workstation and leave the big stuff on the NAS. Works out well for me.
 

BoredSysadmin

Active Member
Mar 2, 2019
569
189
43
For shorter runs I'd highly recommend forgetting separate SFP+ transceivers and just using a DAC (direct attach cable), aka TwinAx. It's basically a fixed-length (usually copper) cable certified for 10gig speed with built-in transceivers. This path is typically much, much cheaper.
SOMEONE ;):rolleyes: mentioned that building your own NAS doesn't have to be expensive, and who am I to argue, since I built my own FreeNAS server. Originally with 6x 3.5" drives, but now expanded with a PCIe external SAS HBA card connected to a disk shelf with another 12 drives.
This disk shelf plus the controller and SAS cables was under $170 delivered. But it is bulky and loud :) I don't mind since I have a basement for these things.
My original 6-disk NAS is extremely quiet. I used a Fractal Design case with built-in sound-damping material, low-noise fans and a low-power CPU.
 
  • Like
Reactions: itronin and Mike W

no com

New Member
May 2, 2019
6
0
1
@ttabbal Yes they tested with HDDs, but that shouldn't max out all traffic in all reviews at 600 MB/s. I didn't know there was something like SATA port sharing; that might explain the 600 MB/s ceiling. @BoredSysadmin, you mention the direct cable. I'd considered that as well, but I wondered how to handle the rest of the network traffic. That could still go over the 1 Gbit connection, but then there are two ways my NAS traffic can enter my workstation: via 1Gb and via 10Gb. Wouldn't that slow the whole business down? Wouldn't it be better to buy a switch with two 10Gb ports and some 1Gb ports, connect the NAS and workstation to the 10Gb ports, and connect the new switch to the main switch via a 1Gb port? As you can see I'm really a noob on these matters :)

Don't tempt me to build my own NAS :). I already build my own workstations in order to get precisely what I want: a very fast machine. I'm less picky with the NAS as my QNAPs really do a good job, except for speed, but that is due to 1 Gbit. I kinda solved that by using an NVMe as cache for my virtual machines and research data, but I really want all my data in one spot. Much easier for maintenance and backups. Thanks for sharing your views on the matter!
 

EngChiSTH

Member
Jun 27, 2018
56
19
8
Chicago
What stops you from simply getting a QNAP with 10G connectivity, since you are already in this ecosystem?

i.e. I use a QNAP TS-332X (reviewed both here on STH and on StorageReview) with 3 HDDs in RAID-5. StorageReview has an article where they tested this small NAS up to 1 GB/s in both read and write. I connect it to a Brocade 6450 switch using AOC (cheap from ebay), put in an M.2 drive (in one of the three slots the TS-332X has) for cache, and am pretty happy. Hard to beat on supportability, power consumption, features and ecosystem. And no, my time is not worthless (aka 'free') to screw around with ZFS and then be stuck supporting it.

If this is too small for you, then yes, there are other options from QNAP and other vendors with higher drive counts, etc.

The way you ask the question is confusing - you know that the whole point of such a system is aggregating multiple drives (for speed and redundancy). So who cares if a single drive is SATA3 when you have 3+ of them? You are going to exceed your 1Gb throughput immediately and then leverage the higher-speed connectivity you've got.
 

BoredSysadmin

Active Member
Mar 2, 2019
569
189
43
@no com - just forget about the limitations of a single drive. It's not the showstopper. In a raid system with multiple drives, each drive contributes some of the read/write performance. It's up to the raid controller or raid software to be designed in a way that is non-blocking.
As we have mentioned several times (and you keep getting stuck on the same 600 MB/s point): the average mechanical drive reads at about 200 MB/s sequentially, and that is the best-case speed. Real-life workloads are unlikely to be 100% sequential reads. On top of that, the design of cheaper NAS boxes is limited by CPU/controller/software bottlenecks.
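To put numbers on that (same ~200 MB/s best-case figure; the 0.5 derating factor is an arbitrary illustration of a mixed workload):

```python
import math

# How many mechanical drives of striped throughput it takes to fill a
# 10 GbE link, using the ~200 MB/s best-case sequential figure above
# plus a derating factor for non-sequential access (illustrative only).

TEN_GBE_MBPS = 1250   # 10 Gbit/s = 1250 MB/s line rate

def hdds_to_saturate(per_disk_mbps=200, derate=1.0):
    """Disks' worth of striped throughput needed to fill 10 GbE."""
    return math.ceil(TEN_GBE_MBPS / (per_disk_mbps * derate))

print(hdds_to_saturate())             # best-case sequential: 7 disks
print(hdds_to_saturate(derate=0.5))   # half-speed mixed workload: 13 disks
```

Best case it takes seven such drives; at half speed it takes thirteen, which is one reason small 4-6 bay HDD boxes plateau well below 10 GbE.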

As for direct cables, I don't understand your concern. It's a full 10gig-speed cable, no ifs and no buts. The only limit is the length of the cable, which should not be an issue for a home lab.
 

no com

New Member
May 2, 2019
6
0
1
Sorry for the confusion, but extra options just create more confusion for a noob like me :). I already ordered a QNAP TS-473 with an expansion card for the 10Gb option. This seems to have the best results in write MB/$, for what it's worth. I wasn't really stuck at the 600 MB/s, though I realize I might have given that impression. I was easily convinced by the first arguments of @gea; that was enough reason to buy some NAS. So thanks for all your contributions, they helped me decide to buy a NAS!
 

Mike W

Kuntrolphreak
Jun 29, 2018
75
29
18
Suffolk, Va
So the expansion cards from QNAP are expensive, especially the QNAP-branded ones. I have the TVS-1282 and just set up 10gb for 2 computers on my network for under $100. I have 4x 1TB SSDs in the four 2.5" bays in a raid 10 configuration. Steady transfers at 1.1-1.2 GB/s on a 22GB file. If you have any questions feel free to hit me up.
 

Mike W

Kuntrolphreak
Jun 29, 2018
75
29
18
Suffolk, Va
What are these SSD make/models? I looked and I can't see any 2.5 SATA SSDs over 8tb
Was typing on my phone and fat-fingered something. I have 4x 1TB SSDs in a raid 10. Honestly, the only reason I had them before moving to the 10GbE network was to add additional storage space. You will get a lot of different answers up here in terms of raid; it really boils down to what your needs are and how much you want to spend. Also, your transfers are only as fast as your slowest device. I now use the SSD raid 10 for transfers to and from the NVMe drive in my PC. I recommend trying all of the options out. A lot of consumer systems actually have raid built right into the motherboard, and Windows offers a raid solution. Try out the different configs and see the difference for yourself.
There is also plenty on the internet to help you.
With an SSD raid 10 array using SATA3 drives you will saturate a 10gb network.
Free RAID Calculator - Calculate RAID Array Capacity and Fault Tolerance.
https://www.servethehome.com/raid-calculator/
RAID Capacity Calculator - WintelGuy.com

Good luck.
 

Mike W

Kuntrolphreak
Jun 29, 2018
75
29
18
Suffolk, Va
Also, I just looked up the specs and that model doesn't have M.2 SATA connections; it comes with four 2.5" drive bays. If you get M.2 SATA drives for it, you have to get drive adapters to install them.
 

Mike W

Kuntrolphreak
Jun 29, 2018
75
29
18
Suffolk, Va
Also, I just looked up the specs and that model doesn't have M.2 SATA connections; it comes with four 2.5" drive bays. If you get M.2 SATA drives for it, you have to get drive adapters to install them.
I was looking at the original model from earlier in the thread, sorry. The TVS-473 has the M.2 connections.
Nice choice!
 

no com

New Member
May 2, 2019
6
0
1
@Mike W, thanks for the offer! They seem to have to import it, so I must wait till next week, but it's great to know beforehand that it might work :)