Next NAS - Time to shed spinning disks?


Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,807
113
At the gym this morning I had a perhaps crazy thought: What if my next NAS no longer included a spinning disk?

On my lab NAS, STH uses about 1.5-2TB for raw photo storage, 0.5TB or so for images I keep locally (Ubuntu, Hyper-V Server 2012 R2, etc.) plus shared programs and files, and another 1TB for VMs plus snapshots, checkpoints and what have you, exported via iSCSI.

If I add that all up, it is about 3.5TB currently for the site's needs with local storage, and it is probably padded by a bit. The VMs could be cleaned up a bit, as well as some of the local OS images. If I had 4-5TB of storage, it would probably be darn close to what I would need total for the lab NAS.

That only costs about $500 for three spindle disks in RAID 1 to store, but I need a bit more performance, so add another $200 for a decent SSD. I do have a good amount of free space on the spindles and am using mostly 3TB/4TB drives. On the other hand, I now have a stack of 800-960GB SSDs, many of which I paid around $300 for. Using 6-7 of those would be $1800-2100 worth of drives vs. $700 for the spindles + SSD, but my need for an SSD cache would essentially go away. Power consumption and noise, even with a ConnectX-2 EN and an HBA, could be minimal.

Of course, this is more about using the drives I already have, but it is starting to get feasible. Has anyone swapped over to an all-SSD NAS yet, specifically with >4 larger-capacity drives?
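For a rough sense of the math, here is a quick Python back-of-the-envelope sketch (illustration only: the capacities and prices are the ones quoted above, while the 7-drive count and ~900GB-per-drive average are assumptions):

Code:
# Back-of-the-envelope numbers from the post above -- a sketch, not a sizing tool.
# Capacities in TB, prices in USD.

need_tb = 1.75 + 0.5 + 1.0        # photos + local images/shares + VMs via iSCSI (~3.25TB)

# Option 1: spinning disks in RAID 1 plus a decent caching SSD
spindle_cost = 500 + 200

# Option 2: reuse the stack of 800-960GB SSDs already on hand (no cache tier needed)
ssd_count = 7                     # assumption: upper end of the 6-7 range
ssd_raw_tb = ssd_count * 0.9      # assumption: ~900GB average per drive
ssd_sunk_cost = ssd_count * 300   # roughly what was paid per drive

print(f"target capacity : ~{need_tb:.2f} TB")
print(f"spindle + cache : ${spindle_cost}")
print(f"all-SSD         : ~{ssd_raw_tb:.1f} TB raw across {ssd_count} drives (~${ssd_sunk_cost} already spent)")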
 

lundrog

Member
Jan 23, 2015
75
8
8
43
Minnesota
vroger.com
Not on the NAS side, but I hardly sell storage that isn't SSD these days. I would consider an ESXi box with local SSD, and a virtual NAS/SAN on top. Then you can have good performance and the benefits of VMware. I would consider a VMware Essentials kit, and another at a remote site (~$500 each), and then use Veeam or like technologies to replicate the data. Worst case, you fail over and change external DNS to the DR side (assuming budget). You just need a direct-connect fiber provider hand-off/cross-connect, or a large VPN tunnel.

NAS technology doesn't adapt that well to SSD, which is why companies like NetApp are struggling in the market. To get the most out of SSD, you need a total rewrite and redesign of the storage OS. That is why EMC bought XtremIO. If we had the money, I would just say do a vSphere VSAN setup, but the per-socket cost would be rather steep, even if you run it in a VMware Essentials kit.
 

cesmith9999

Well-Known Member
Mar 26, 2013
1,421
470
83
Personally, I wish my next storage device at home could be all SSD. I just cannot justify the 8x cost difference per GB. In your case, Patrick, you have just enough used gear to almost justify it.

Chris
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Yes, SSD is the only way to go in small data storage systems <10TB (home use perhaps <4TB).
If you need >10TB then it's still far cheaper to use an SSD cache in front of multiple rust disks.

It also really depends on your NAS use case. The majority of home NAS setups I've personally come across are just media stores, which are fine to keep on rust drives.
Where there is a component of VM storage, it's done on 2x small SSDs locally on the host device to conserve power. Don't forget that the use cases of the users on this forum are generally far more advanced than those of the average NAS owner.

With that in mind, if you also have a surplus of SSDs (like Patrick does) then there's no reason not to go SSD-only.
 

HellDiverUK

Active Member
Jul 16, 2014
290
52
28
47
Is anyone brave enough to use RAID0 or something like DrivePool in their NAS units, assuming the better reliability of SSD compared to HDD?

I'm now having crazy ideas about 4TB of SSD storage in a tiny case like the Synology 414Slim or the QNAP 451S (which still has that Bay Trail Celeron that can do Plex transcoding).

I have 2.73TB of storage used on my server, and to be honest most of it is movies and TV shows I'll never watch again...
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Is anyone brave enough to use RAID0 or something like DrivePool in their NAS units, assuming the better reliability of SSD compared to HDD?

I have 2.73TB of storage used on my server, and to be honest most of it is movies and TV shows I'll never watch again...
Not for me. I have more than 8TB just in family pictures and movies. I would definitely want at least 1 parity disk and a backup. If the data was replaceable, I still wouldn't run without some form of parity and backup, because my time to restore any lost media is worth more to me than the cost to buy a couple extra disks.

All that being said, it would still be neat to see someone else do it :)
 

lundrog

Member
Jan 23, 2015
75
8
8
43
Minnesota
vroger.com
Not for me. I have more than 8TB just in family pictures and movies. I would definitely want at least 1 parity disk and a backup. If the data was replaceable, I still wouldn't run without some form of parity and backup, because my time to restore any lost media is worth more to me than the cost to buy a couple extra disks.

All that being said, it would still be neat to see someone else do it :)
Just remember that RAID isn't data protection; make sure you have a complete backup.
 

rubylaser

Active Member
Jan 4, 2013
846
236
43
Michigan, USA
Just remember that RAID isn't data protection; make sure you have a complete backup.
Thanks, I mentioned that I need parity and a backup in that quote. I fully agree that parity is not a backup. It does, however, provide some data redundancy vs. the RAID0 that was mentioned in the previous post :) I know HellDiverUK was just speaking hypothetically anyway.

That being said, I am very paranoid about multiple backups of my irreplaceable family media. I checksum everything and I keep versions of the files. Along with these local policies, I have multiple encrypted local copies (external drives), an rsnapshot version of the files on the server, and syncs to CrashPlan's servers, my colocated datacenter storage server, and Amazon Glacier. This data is important to me, so I treat it that way.
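For the "I checksum everything" part, here is a minimal Python sketch of the idea (illustration only; the paths are hypothetical, and a real setup would add logging, exclusions, and so on):

Code:
#!/usr/bin/env python3
"""Build a SHA-256 manifest of a media tree and flag files whose hash changed
since the last run -- the kind of check to do before pushing copies out to
external drives, rsnapshot, or cloud targets. Paths are hypothetical."""
import hashlib
import json
from pathlib import Path

MEDIA_ROOT = Path("/srv/media/family")       # hypothetical source directory
MANIFEST = Path("/srv/media/manifest.json")  # hypothetical manifest location

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
new = {str(p.relative_to(MEDIA_ROOT)): sha256(p)
       for p in MEDIA_ROOT.rglob("*") if p.is_file()}

for name, digest in new.items():
    if name in old and old[name] != digest:
        print(f"CHANGED: {name}")  # investigate before the next backup run

MANIFEST.write_text(json.dumps(new, indent=2))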
 

lundrog

Member
Jan 23, 2015
75
8
8
43
Minnesota
vroger.com
Thanks, I mentioned that I need parity and a backup in that quote. I fully agree that parity is not a backup. It does, however, provide some data redundancy vs. the RAID0 that was mentioned in the previous post :) I know HellDiverUK was just speaking hypothetically anyway.

That being said, I am very paranoid about multiple backups of my irreplaceable family media. I checksum everything and I keep versions of the files. Along with these local policies, I have multiple encrypted local copies (external drives), an rsnapshot version of the files on the server, and syncs to CrashPlan's servers, my colocated datacenter storage server, and Amazon Glacier. This data is important to me, so I treat it that way.
Glad to see you play it safe; I have seen so much data loss in my day.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
I don't think I would bother with an all-SSD NAS - too much overhead in all of the software layers, IP stacks, lossy Ethernet, etc. Of course it would be faster than a NAS on spinning disks, but the SSDs would be held back by all of that. IMHO, if you want to go with all-flash storage you should also be trying to take as much latency as possible out of the entire storage path: native SAS or FC to your SAN, or something on IB or 10GbE if it can take advantage of RDMA/RoCE. Regular iSCSI is just as bad as SMB/NFS.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
I would think a poor-man's storage tiering might make sense. For me it would look like this:

- All SSD for the active NAS, RAID0.

- A small pool of large spinny disks with spindown so they are running low-power most of the time.
--- Spinny disks could be single disks, perhaps with some form of pooling (a la FlexRAID).
--- Just as easily you could use "real" RAID groups. No matter - as long as they spin down for power management.

- A daily "replication" of the active NAS onto the spinny disks for local backup and data protection (see the sketch at the end of this post).
--- Replication could be more frequent if your risk tolerance is lower.
--- For most of us, however, our Pictures/Videos/Music/DVDs just aren't that actively changing.

- 10GbE LAN, of course, to make it make sense.

- Cloud-based backup for longer-term protection.

Probably ends up with 4-8 1TB-class SSDs and 3-4 4TB or 6TB spinners to build it out. You don't need really fast or high-endurance SSDs for this because you'll be saturating the 10GbE link after the first 3 in RAID0.

Personally I think this is a perfect application for the BX100-type lower-cost SSDs and the new-but-slow 8TB spinners.

You could build it out pretty easily in a case like this: Lian-Li Global | PC-Q08 and the ASUS Avoton/Marvell MB (Server & Workstations - P9A-I/C2750/SAS/4L - ASUS). Or if you wanted something more traditional you could use this Haswell MB: ASRock E3C224D4I-14S Extended mini ITX Server Motherboard LGA 1150 Intel C224 DDR3 1600/1333 - Newegg.com
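For the daily replication step above, a minimal Python sketch (illustration only: the mount points are made up, rsync is assumed to be installed, and the archive disks simply spin up when the job touches them):

Code:
#!/usr/bin/env python3
"""Mirror the active all-SSD share onto the large spun-down disks once a day,
e.g. from cron. Mount points below are hypothetical."""
import subprocess

SRC = "/mnt/ssd-pool/"      # active all-SSD NAS share (trailing slash: copy contents)
DST = "/mnt/archive-pool/"  # big spinners, spun down most of the day

# -a preserves permissions/times, -H keeps hard links, --delete mirrors removals
subprocess.run(["rsync", "-aH", "--delete", SRC, DST], check=True)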
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
At the gym this morning I had a perhaps crazy thought: What if my next NAS no longer included a spinning disk?

On my lab NAS, STH uses about 1.5-2TB for raw photo storage, 0.5TB or so for images I keep locally (Ubuntu, Hyper-V Server 2012 R2, etc.) plus shared programs and files, and another 1TB for VMs plus snapshots, checkpoints and what have you, exported via iSCSI.

If I add that all up, it is about 3.5TB currently for the site's needs with local storage, and it is probably padded by a bit. The VMs could be cleaned up a bit, as well as some of the local OS images. If I had 4-5TB of storage, it would probably be darn close to what I would need total for the lab NAS.

That only costs about $500 for three spindle disks in RAID 1 to store, but I need a bit more performance, so add another $200 for a decent SSD. I do have a good amount of free space on the spindles and am using mostly 3TB/4TB drives. On the other hand, I now have a stack of 800-960GB SSDs, many of which I paid around $300 for. Using 6-7 of those would be $1800-2100 worth of drives vs. $700 for the spindles + SSD, but my need for an SSD cache would essentially go away. Power consumption and noise, even with a ConnectX-2 EN and an HBA, could be minimal.

Of course, this is more about using the drives I already have, but it is starting to get feasible. Has anyone swapped over to an all-SSD NAS yet, specifically with >4 larger-capacity drives?
I'm with you, Patrick. 480GB drives have gotten really cheap, and with 800 and 960GB drives popping up on eBay at amazing prices now, an all-SSD NAS is quite appealing.
For corporate use, where price is not as important, we'll soon be seeing reasonably priced 1.6 and 3.2TB 2.5" drives, at which point spinning 3.5" disks will start to look like paperweights.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,807
113
For corporate use, where price is not as important, we'll soon be seeing reasonably priced 1.6 and 3.2TB 2.5" drives, at which point spinning 3.5" disks will start to look like paperweights.
Or tape replacements :)
 

Entz

Active Member
Apr 25, 2013
269
62
28
Canada Eh?
My ESXi datastores are almost entirely SSD now (4x 730 480GB drives), not really out of need but because I can :p and will be expanding that further in the future. Lower-priority VMs are still on the spinners (an S3700 as SLOG and local SSD caches help a bunch too).

I also have far too much in the way of photos/movies/CD images/etc. to move everything over to SSDs. Replacing ~16TB worth of WD Reds is not cost-effective yet. Same with my backup server (4x4TB); far too expensive for something like that. I guess I could stop being a hoarder, remove a bunch of stuff, and make it work ;)

That being said, I really hope we get good prices on 1TB+ SSDs in the future (25c/GB, enterprise-ish class). I have 18 free 2.5" slots in my chassis and they are looking for some company...
 
  • Like
Reactions: snazy2000

TallGraham

Member
Apr 28, 2013
143
23
18
Hastings, England
At the gym this morning I had a perhaps crazy thought: What if my next NAS no longer included a spinning disk?

On my lab NAS, STH uses about 1.5-2TB for raw photo storage, 0.5TB or so for images I keep locally (Ubuntu, Hyper-V Server 2012 R2, etc.) plus shared programs and files, and another 1TB for VMs plus snapshots, checkpoints and what have you, exported via iSCSI.

If I add that all up, it is about 3.5TB currently for the site's needs with local storage, and it is probably padded by a bit. The VMs could be cleaned up a bit, as well as some of the local OS images. If I had 4-5TB of storage, it would probably be darn close to what I would need total for the lab NAS.

That only costs about $500 for three spindle disks in RAID 1 to store, but I need a bit more performance, so add another $200 for a decent SSD. I do have a good amount of free space on the spindles and am using mostly 3TB/4TB drives. On the other hand, I now have a stack of 800-960GB SSDs, many of which I paid around $300 for. Using 6-7 of those would be $1800-2100 worth of drives vs. $700 for the spindles + SSD, but my need for an SSD cache would essentially go away. Power consumption and noise, even with a ConnectX-2 EN and an HBA, could be minimal.

Of course, this is more about using the drives I already have, but it is starting to get feasible. Has anyone swapped over to an all-SSD NAS yet, specifically with >4 larger-capacity drives?
Why use SSDs? Because you can! :D

As you say, Patrick, you have these disks lying around anyway, so make use of them.

I have a 4 x 128GB SSD RAID array in my storage server. I think it is RAID5 or RAID6. I'd have to check. This runs the OS for the server. All my other 16 disks are currently 1TB 2.5" spinny disks. I would swap them all out in a heartbeat for SSDs if I could afford it.

I personally run Adaptec RAID cards, so they do pretty well with the SSDs. But my question to you would be: how much controller bandwidth do you have?

For example, if you have an old SATA 150 (1.5Gbps) RAID controller running on a PCIe x1 slot then there isn't really much point. But if you have a nice SATA 600 (6Gbps) or SAS RAID card that runs on a PCIe x8 v2.0 or v3.0 slot then go for it.

The big thing to remember with RAID is that you absolutely must still do backups. RAID is fault tolerance, not a backup strategy. With that in mind, always remember to add a "hot spare" disk to the array if it is important data. Skipping this is the cardinal sin most people commit.

So to summarise, if you can do it and it won't cost you anything then why not :D

Just remember to post the benchmark speeds here please so we can all wish we had SSDs ;)
 

spyrule

Active Member
You're going to need good CPUs to really push an SSD array. Also, now you're talking speeds where you'd need a 10Gb network to really benefit. SSDs are nice, but until they match spinning disks on $/GB they don't make sense unless the rest of your infrastructure can support and benefit from their speed. Lastly, SSDs don't have an unlimited lifespan, so you still need a good backup plan. (CrashPlan from Code42 is awesome.)
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Just revisiting this. When the deal was on, I grabbed 12x 128GB M.2 SSDs at $28 a pop.

I'm going to be using 4x of those for SLOG/cache attached to a ZFS array; the other 8, however, are going to local storage for my VM host (built-in NAS).

Build logs will of course be documented and uploaded; I'm still in the process of buying all of this stuff.
 
  • Like
Reactions: Biren78

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Why 4 as SLOG/ZIL cache? A single 128GB device will be more than ample to accommodate any write cache needed without throwing 4 devices at it. I guess you could mirror the ZIL device. I think the verdict is still out on how much bang for the buck that brings.
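For a rough sense of why a single small device is ample, a back-of-the-envelope sketch (assumptions only: one saturated 10GbE link and the OpenZFS default ~5-second transaction group interval):

Code:
# The SLOG only has to hold the sync writes ZFS is buffering until the next
# transaction group commits, so size it from ingest rate x a couple of txg intervals.

ingest_gbps = 10        # assumption: one fully saturated 10GbE link
txg_interval_s = 5      # assumption: OpenZFS default txg timeout (~5 seconds)
headroom = 2            # keep a couple of txgs' worth, to be generous

slog_needed_gb = (ingest_gbps / 8) * txg_interval_s * headroom
print(f"~{slog_needed_gb:.0f} GB of SLOG is plenty")  # ~12-13GB vs. a 128GB device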
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,641
2,058
113
You're going to need good CPUs to really push an SSD array. Also, now you're talking speeds where you'd need a 10Gb network to really benefit. SSDs are nice, but until they match spinning disks on $/GB they don't make sense unless the rest of your infrastructure can support and benefit from their speed. Lastly, SSDs don't have an unlimited lifespan, so you still need a good backup plan. (CrashPlan from Code42 is awesome.)
I would imagine 10gig LAN + 10gig for iSCSI; it's not much more $$ to get a dual 10gig card, and that's my plan at least. I plan to post for feedback in the near future, but it seems sensible to keep LAN traffic minimal while still allowing huge data transfers. Or even bond the 10gig ports and stick in a 4x1gig card for LAN. Depends on your network I guess :)