High Capacity Storage Server for 10Gb Network


IamSpartacus

Well-Known Member
Mar 14, 2016
I currently have two 64TB (80TB raw) UnRAID storage servers on my network. My UnRAID data drives are 8TB Seagate SMR drives, which have been incredibly reliable over the past few years (never lost a drive), but they obviously have major limitations with regard to striped disk pooling solutions. UnRAID has fit my needs for years, but now, with my continuing to add more and more services and with my network upgrade to 10Gb last year, I'm looking for better overall performance. With that in mind, I'm looking at alternative software solutions to UnRAID and, in turn, recommendations for data drives to go along with that new solution.

The data I store on my current servers (mirrored) can be summarized as follows:

70% Media
10% Backups
10% Surveillance Data
10% Personal Data

I've recently built a FreeNAS Corral storage server consisting of all SSDs to act as the shared VM datastore for my ESXi cluster. It's been working out quite well and I'm happy with it. So FreeNAS (and thus ZFS) is an option I'm considering, but I'm certainly open to other alternatives.

As for the data drives, this is where I really need some help. I don't have a ton of cash to drop right now on new drives so for the time being I'll probably just be upgrading one of my UnRAID servers. Even so, I'm still looking for the best bang for my buck.

Would love some suggestions/recommendations.
 

ttabbal

Active Member
Mar 10, 2016
If you're after performance, the SMR drives are going to be a problem. My first thought is to keep them on UnRAID since you know they work well enough there, using them for the big media storage, which is what they are good at. And, generally speaking, you don't need as much performance for media anyway.

For performance, wide stripe width is key. If you have plenty of CPU/RAM, ZFS can perform very well. It's rarely the fastest on any given hardware, but can do quite well. You don't need tons of resources, but people try to run it on little ARM/Atom machines with 2G RAM and complain about performance....

I've been very happy with my 10x Mirror array. It's mostly 2TB 7200 drives and can keep up with 10Gb most of the time. Some of the small random I/O stuff can bog it down, but overall it works better than I expected for spinners. If it really bothered me, I'd look into SSD caching, but it's good enough for my needs right now. Server spec is pretty low end for this place, 2x E5506, 98GB RAM, 3x H310 HBAs and a Mellanox CX2 for 10GbE. I have a CPU upgrade on the way, but that's more for Plex transcoding and general VM use. As a filer, it's more than enough.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
ttabbal said:
If you're after performance, the SMR drives are going to be a problem. My first thought is to keep them on UnRAID since you know they work well enough there, using them for the big media storage, which is what they are good at. And, generally speaking, you don't need as much performance for media anyway.

For performance, wide stripe width is key. If you have plenty of CPU/RAM, ZFS can perform very well. It's rarely the fastest on any given hardware, but can do quite well. You don't need tons of resources, but people try to run it on little ARM/Atom machines with 2G RAM and complain about performance....

I've been very happy with my 10x Mirror array. It's mostly 2TB 7200 drives and can keep up with 10Gb most of the time. Some of the small random I/O stuff can bog it down, but overall it works better than I expected for spinners. If it really bothered me, I'd look into SSD caching, but it's good enough for my needs right now. Server spec is pretty low end for this place, 2x E5506, 98GB RAM, 3x H310 HBAs and a Mellanox CX2 for 10GbE. I have a CPU upgrade on the way, but that's more for Plex transcoding and general VM use. As a filer, it's more than enough.
Thanks for the feedback.

One main point to be made is that I don't want to split my media storage off from the rest of my storage; I really do prefer to consolidate it all into one solution. I'm aware the SMR drives are going to have to go, which is why I can probably only afford to upgrade one of the two servers at this time. Mirroring is probably not an option for me since I need high capacity (at least 48TB usable, preferably 64TB+) and won't be able to afford 12-16 8TB drives. So I'm thinking RAIDz2/3 with SSD caching.
 

ttabbal

Active Member
Mar 10, 2016
I thought that might be the situation. Caching can help a lot, but at some point you have to hit the rust; no way around that. If you set up an L2ARC and enable sequential reads to use it, that can offload the media access to it. That prevents those reads from causing seeking, which helps. Then either disable sync writes, which is probably how UnRAID is set up by default anyway, or get a decent SLOG to handle those. Though it might make sense to try with them on, and watch the stats to see if it's even an issue.
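In ZFS terms that works out to something like the following. This is just a sketch: the pool name and device names are made up, and the sysctl spelling is the FreeBSD/FreeNAS one from this era.

```shell
# Rough sketch of the tuning described above. Pool name "tank" and
# device names (ada4, ada5) are placeholders, not from this thread.

# Add an SSD as L2ARC (read cache) to the pool
zpool add tank cache ada4

# By default L2ARC skips prefetched (sequential) reads; allow them so
# big media streams can be served from cache (FreeBSD-style tunable)
sysctl vfs.zfs.l2arc_noprefetch=0

# Either disable sync writes on the media dataset...
zfs set sync=disabled tank/media

# ...or add a fast, power-safe SSD as a SLOG to absorb them instead
zpool add tank log ada5

# Watch the stats first to see if sync writes are even an issue
zpool iostat -v tank 5
```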

Before you go all-in, if you can, test the SMRs in a filled up raidz. Don't forget to do a replace to simulate a drive failure. With 8TB SMRs, I expect that's going to take a long time, but I'm guessing as I've never used an SMR drive. And keep in mind that raidz performance is about equal to a single drive, though sequential reads can do better.
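The replace test is only a few commands, roughly like this (device names are placeholders again; expect the resilver on a filled 8TB SMR to run a very long time):

```shell
# Simulate a failed member and watch the rebuild.
zpool offline tank ada2         # take one drive offline
zpool replace tank ada2 ada6    # resilver onto a spare drive
zpool status tank               # shows resilver progress and ETA
```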

Without SMR, using 2 6-drive raidz2 in a pool I couldn't get close to 10Gb. Max throughput tended to run about 300MB/sec for sequential reads. Using 2TB drives, rebuild times for drive replacement ran about 12 hours on arrays about 50% full.
 

CyberSkulls

Active Member
Apr 14, 2016
On the data storage side I think your cheapest option, if you're wanting to stick with the 8TB sizes, is the WD 16TB My Book Duos. The drives only have a 2-year warranty on them vs. 3 years on a bare drive. I personally sent mine back due to over half of them having vibration issues. But if you catch them in the WD store and use the 20% off Plex coupon code they would be $400, or $200/drive. You could also sell the enclosure on eBay to lower your cost per drive by $20.

I know cash is tight, so you could always do two drives at a time, sell a few archive drives, rinse and repeat.

I'm currently waiting for the 10TB Red to come out. The 10TB Purple was announced the other day so I have to assume the Red isn't far behind.


 

Evan

Well-Known Member
Jan 6, 2016
I think somebody the other day had a quantity of 10TB HGST He10's for sale at ~$425, so that's probably the kind of price it would be... expensive disks for home but very good for any kind of RAID.

It's kind of sad that there isn't an open source local file system that splits blocks into 1MB chunks and throws them around, and then scales to multi-node as well. Ceph kind of does it, but that's nothing like what the commercial vendors have (IBM XIV, as an example).
 

IamSpartacus

Well-Known Member
Mar 14, 2016
ttabbal said:
I thought that might be the situation. Caching can help a lot, but at some point you have to hit the rust; no way around that. If you set up an L2ARC and enable sequential reads to use it, that can offload the media access to it. That prevents those reads from causing seeking, which helps. Then either disable sync writes, which is probably how UnRAID is set up by default anyway, or get a decent SLOG to handle those. Though it might make sense to try with them on, and watch the stats to see if it's even an issue.

Before you go all-in, if you can, test the SMRs in a filled up raidz. Don't forget to do a replace to simulate a drive failure. With 8TB SMRs, I expect that's going to take a long time, but I'm guessing as I've never used an SMR drive. And keep in mind that raidz performance is about equal to a single drive, though sequential reads can do better.

Without SMR, using 2 6-drive raidz2 in a pool I couldn't get close to 10Gb. Max throughput tended to run about 300MB/sec for sequential reads. Using 2TB drives, rebuild times for drive replacement ran about 12 hours on arrays about 50% full.
Yea I'm not looking to max out my 10Gb connections using spinners as that's just not realistic given my budget. But I know I'm losing out on performance by not striping my data. My main hesitation for going striped over the past few years is the cost. I either have to go with striped mirrors (RAID10) where I lose a ton of usable space when we're talking 8-10TB drives, or I have to go RAIDz2/3 where the rebuild times are probably insane. So that's still a dilemma for me.
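For what it's worth, the space side of that dilemma is easy to put numbers on. A quick sketch for a hypothetical twelve 8TB drives (raw figures, before ZFS overhead and reserved space):

```shell
# Usable space for 12x 8TB drives (raw TB, ignoring ZFS overhead)
drives=12; size=8

# Striped mirrors (RAID10): half the raw capacity survives
mirrors=$(( drives / 2 * size ))

# Two 6-wide raidz2 vdevs: (6 - 2) data drives per vdev
raidz2=$(( 2 * (6 - 2) * size ))

echo "mirrors: ${mirrors}TB, raidz2: ${raidz2}TB"
# -> mirrors: 48TB, raidz2: 64TB
```

So on the same twelve drives, raidz2 keeps a third more usable space, at the cost of those long, heavy rebuilds.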


CyberSkulls said:
On the data storage side I think your cheapest option, if you're wanting to stick with the 8TB sizes, is the WD 16TB My Book Duos. The drives only have a 2-year warranty on them vs. 3 years on a bare drive. I personally sent mine back due to over half of them having vibration issues. But if you catch them in the WD store and use the 20% off Plex coupon code they would be $400, or $200/drive. You could also sell the enclosure on eBay to lower your cost per drive by $20.

I know cash is tight, so you could always do two drives at a time, sell a few archive drives, rinse and repeat.

I'm currently waiting for the 10TB Red to come out. The 10TB Purple was announced the other day so I have to assume the Red isn't far behind.


Thanks for the suggestion. I've considered the My Book Duo option in the past but I prefer 3 year warranties MINIMUM on my drives. 5 years would be great but the Red Pros are considerably more of course.


Evan said:
I think somebody the other day had a quantity of 10TB HGST He10's for sale at ~$425, so that's probably the kind of price it would be... expensive disks for home but very good for any kind of RAID.

It's kind of sad that there isn't an open source local file system that splits blocks into 1MB chunks and throws them around, and then scales to multi-node as well. Ceph kind of does it, but that's nothing like what the commercial vendors have (IBM XIV, as an example).
Yea I remember seeing that listing for the 10TB HGST's. The thought of having to buy 8 of those just to get to the raw capacity I have now in my servers is deflating though :(. That doesn't even give me the 64TB usable I have now. And furthermore, as I mentioned in my other responses, if I decided to go striped RAID, I can only imagine how long those rebuild times would be.
 

CyberSkulls

Active Member
Apr 14, 2016
IamSpartacus said:
Thanks for the suggestion. I've considered the My Book Duo option in the past but I prefer 3 year warranties MINIMUM on my drives. 5 years would be great but the Red Pros are considerably more of course.
The only other option I can think of is the 10TB Ironwolf which has been trending at $369 from both Amazon and Newegg. Newegg has also been having the 10TB Seagate Ent drive for $400 a lot lately after their coupon code. $31 more than the Ironwolf but it would get you the 5 year warranty you would like.

I've thought about the Seagates myself but I'm changing chassis to 12 bay 2U and putting in slow spinning quiet fans and in doing so, I need 5,400 rpm drives to run cooler.


 

ttabbal

Active Member
Mar 10, 2016
It's more complicated to set up, no easy UI stuff like FreeNAS... But it seems like SnapRAID might be a good choice for your SMR drives. No stripes and such. Of course, it's basically open source UnRAID, so I guess it's about where you are now. There have been some discussions in the Linux forum here about using SSD cache in front of it for performance. It looks to be all CLI setup, so that's good or bad depending on how comfortable you are with that. I don't think it will be much different in performance to the rust than UnRAID, but the cache layer might give you what you want.
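For anyone curious, the CLI setup really is just a short config file plus a couple of commands. A minimal sketch, with all paths made up for illustration:

```shell
# Minimal SnapRAID layout -- every path here is an illustrative
# placeholder, adjust to your own mount points.
cat > /etc/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
EOF

snapraid sync       # compute/update parity after files change
snapraid scrub      # periodically verify data against parity
snapraid fix -d d1  # rebuild a failed data drive from parity
```

Unlike real-time parity, `sync` only runs when you tell it to, which is part of why SnapRAID suits mostly-static media on SMR drives.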

Picking stuff can be a pain, good/fast/cheap, pick 2. :)
 

IamSpartacus

Well-Known Member
Mar 14, 2016
CyberSkulls said:
The only other option I can think of is the 10TB Ironwolf which has been trending at $369 from both Amazon and Newegg. Newegg has also been having the 10TB Seagate Ent drive for $400 a lot lately after their coupon code. $31 more than the Ironwolf but it would get you the 5 year warranty you would like.

I've thought about the Seagates myself but I'm changing chassis to 12 bay 2U and putting in slow spinning quiet fans and in doing so, I need 5,400 rpm drives to run cooler.


Yea I've looked into the Ironwolfs but I've heard they are really loud (vibration issues).
ttabbal said:
It's more complicated to set up, no easy UI stuff like FreeNAS... But it seems like SnapRAID might be a good choice for your SMR drives. No stripes and such. Of course, it's basically open source UnRAID, so I guess it's about where you are now. There have been some discussions in the Linux forum here about using SSD cache in front of it for performance. It looks to be all CLI setup, so that's good or bad depending on how comfortable you are with that. I don't think it will be much different in performance to the rust than UnRAID, but the cache layer might give you what you want.

Picking stuff can be a pain, good/fast/cheap, pick 2. :)
I've been watching SnapRAID/MergerFS closely over the past few months indeed. The caching layer is the last piece I'd be looking for but it doesn't appear to be there yet. But yes, the performance would be very similar to what I have now which is why I'm exploring other options.
 

CyberSkulls

Active Member
Apr 14, 2016
IamSpartacus said:
Yea I've looked into the Ironwolfs but I've heard they are really loud (vibration issues).
That's exactly why I'm not buying the 8TB Reds and sent back what I bought. I had the 16TB Duos but just over half the drives had vibration issues. Some were silky smooth and others were junk. 99% of people would have used them and said the vibration is normal with that many platters. My thoughts were if half of them felt like they were not even powered on, then the vibration of the other half is NOT normal. Unfortunately I didn't get any enclosures that had two smooth drives. It was always one good, one bad. So I had to send the entire enclosure back, I couldn't mix and match and keep only the good ones. I figured if I wanted 20 good drives I would need to buy them as bare drives and order 50, returning 30 of them. And that just seems silly.

My opinion is they are garbage and I won't be buying more of them. I will however try a couple of the 10TB Reds when they come out and see if they are any better. Or continue buying 4TB drives since I know they run perfectly smooth n silent, at least the ones that I've received.



 

Mirabis

Member
Mar 18, 2016
Shameless plug:.... Do you plan on selling those 8TB SMR's?


 

IamSpartacus

Well-Known Member
Mar 14, 2016
CyberSkulls said:
That's exactly why I'm not buying the 8TB Reds and sent back what I bought. I had the 16TB Duos but just over half the drives had vibration issues. Some were silky smooth and others were junk. 99% of people would have used them and said the vibration is normal with that many platters. My thoughts were if half of them felt like they were not even powered on, then the vibration of the other half is NOT normal. Unfortunately I didn't get any enclosures that had two smooth drives. It was always one good, one bad. So I had to send the entire enclosure back, I couldn't mix and match and keep only the good ones. I figured if I wanted 20 good drives I would need to buy them as bare drives and order 50, returning 30 of them. And that just seems silly.

My opinion is they are garbage and I won't be buying more of them. I will however try a couple of the 10TB Reds when they come out and see if they are any better. Or continue buying 4TB drives since I know they run perfectly smooth n silent, at least the ones that I've received.
So you had vibration issues with the 8TB WD Reds even though they are 5400RPM? That is disappointing to hear for sure.
 

MvL

Member
Jan 7, 2011
Netherlands
Interesting topic!

I'm in a similar situation. Running unRAID at the moment and I also have 7 Seagate SMR drives. I'm not sure yet what to change about my setup. I tested Proxmox, ESXi, OpenMediaVault, RockSTOR and FreeNAS a bit. I must admit I like Docker containers. Proxmox uses LXC containers, and I'm not sure if ESXi supports containers with the new 6.5 version.

OpenMediaVault has a snapRAID plugin. Maybe you can test that and still use your SMR drives. I think SMR drives are good drives to store your media.
 

CyberSkulls

Active Member
Apr 14, 2016
IamSpartacus said:
So you had vibration issues with the 8TB WD Reds even though they are 5400RPM? That is disappointing to hear for sure.
Yes. I bought (10) of the 16TB WD MyBook Duos so a total of (20) 8TB Reds. IIRC 13 of them had vibration issues and 7 of them ran very smooth where you could only tell they were running while seeking/reading/writing. I had experienced the same vibration issues, some good some bad, when they were first released.

So the best I can say is that their QC on balanced drives is complete crap. They may have run strong for 5 years or 5 months. Had they gone bad, I know WD would just send me returns that someone else sent in for a defect. Thanks but no thanks.

Had I been running a single drive it probably wouldn't matter but when you start loading 20-40 of these in a chassis and 13/26 of them have vibration issues that issue in itself is going to be magnified. It was just a recipe for disaster. I hated it because I wanted to replace my 100+ 2TB drives and drastically simplify my setup.

I even shot from the hip, thinking that if I left them running long enough to get warm, the bearings would seat a little better and the vibrations would smooth out. I have some 8TB externals, the white label EZZX model, that have the same issue. So it's not WD "sticker color" specific, it's the entire line. And since these are basically slower-RPM HGST He drives with different firmware, maybe that vibration goes away at 7,200 RPM.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
MvL said:
Interesting topic!

I'm in a similar situation. Running unRAID at the moment and I also have 7 Seagate SMR drives. I'm not sure yet what to change about my setup. I tested Proxmox, ESXi, OpenMediaVault, RockSTOR and FreeNAS a bit. I must admit I like Docker containers. Proxmox uses LXC containers, and I'm not sure if ESXi supports containers with the new 6.5 version.

OpenMediaVault has a snapRAID plugin. Maybe you can test that and still use your SMR drives. I think SMR drives are good drives to store your media.
SMRs are good for strictly media, but as my use case has evolved (surveillance, backups, etc.) and my network along with it (10Gb now), I'm after better performance.


CyberSkulls said:
Yes. I bought (10) of the 16TB WD MyBook Duos so a total of (20) 8TB Reds. IIRC 13 of them had vibration issues and 7 of them ran very smooth where you could only tell they were running while seeking/reading/writing. I had experienced the same vibration issues, some good some bad, when they were first released.

So the best I can say is that their QC on balanced drives is complete crap. They may have run strong for 5 years or 5 months. Had they gone bad, I know WD would just send me returns that someone else sent in for a defect. Thanks but no thanks.

Had I been running a single drive it probably wouldn't matter but when you start loading 20-40 of these in a chassis and 13/26 of them have vibration issues that issue in itself is going to be magnified. It was just a recipe for disaster. I hated it because I wanted to replace my 100+ 2TB drives and drastically simplify my setup.

I even shot from the hip, thinking that if I left them running long enough to get warm, the bearings would seat a little better and the vibrations would smooth out. I have some 8TB externals, the white label EZZX model, that have the same issue. So it's not WD "sticker color" specific, it's the entire line. And since these are basically slower-RPM HGST He drives with different firmware, maybe that vibration goes away at 7,200 RPM.
Yea that's not what I want to hear. This kind of feedback has me leaning more towards 8 drive RAIDz2 vdevs using the 6TB drives. This way I can build a single 36TB usable vdev and add a second shortly after as I recoup cash from selling off my SMR's.
 

CyberSkulls

Active Member
Apr 14, 2016
IamSpartacus said:
Yea that's not what I want to hear. This kind of feedback has me leaning more towards 8 drive RAIDz2 vdevs using the 6TB drives. This way I can build a single 36TB usable vdev and add a second shortly after as I recoup cash from selling off my SMR's.
Others that I have chatted with say the 6TB Reds are just as quiet and vibration free as my 2TB & 4TB drives. I'm hoping the vibration issue has been dealt with in their 10TB drive lineup. Now that the 10TB Purple is out, I'm waiting for the Red to be spotted in the wild.

I'm not against Seagate as far as being better or worse than WD; I'm just still a little butthurt about losing (30) of the 3TB drives of doom as well as (5) 500GB drives around the same time. Losing well over $3,000 worth of drives, even though that was a problematic drive line, still doesn't sit well with me. So I'm hesitant to reward them with $7,000 worth of business after they completely railroaded me in the past. I can forgive, but I never forget!!


 

fractal

Active Member
Jun 7, 2016
Not to question your heartfelt beliefs, but given that you have a minimum of three servers already, why are you opposed to leaving them alone and moving your other data?

UnRAID on SMR drives is fine for media and probably backups. Why not add a few drives to your FreeNAS box for your performance-sensitive data?
 

ttabbal

Active Member
Mar 10, 2016
fractal said:
Not to question your heartfelt beliefs, but given that you have a minimum of three servers already, why are you opposed to leaving them alone and moving your other data?

UnRAID on SMR drives is fine for media and probably backups. Why not add a few drives to your FreeNAS box for your performance-sensitive data?

I don't presume to speak for the OP, but I can see an ease-of-use argument for keeping everything in one pool/server. And power for only one server, should that be an issue. It does also mean everything breaks at once should that one server go down, so as always there are tradeoffs. The SMR media array could be problematic with enough clients trying to read it.