High Capacity Storage Server for 10Gb Network


IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Others that I have chatted with say the 6TB Reds are just as quiet and vibration-free as my 2TB and 4TB drives. I'm hoping the vibration issue has been dealt with in their 10TB drive lineup. Now that the 10TB Purple is out, I'm waiting for the Red to be spotted in the wild.

I'm not against Seagate as far as being better or worse than WD; I'm just still a little butthurt about losing (30) of the 3TB drives of doom as well as (5) 500GB drives around the same time. Losing well over $3,000 worth of drives, even if that was a known problematic drive line, still doesn't sit well with me. So I'm hesitant to reward them with $7,000 worth of business after they completely railroaded me in the past. I can forgive, but I never forget!!
I'm interested to see how the 10TB Reds do as well. Hopefully they drop soon.


Not to question your heartfelt beliefs, but given that you already have at least three servers, why are you opposed to leaving them alone and moving your other data?

UnRAID on SMR drives is fine for media and probably backups. Why not add a few drives to your FreeNAS box for your performance-sensitive data?
A few reasons:

1. I'm losing patience with UnRAID (on both servers). I've been dealing with an issue where my user shares disappear, at which point the server is frozen and useless until a hard reset. No one on the UnRAID forums seems to have any insight into it, and none of the other non-striped RAID setups appear up to par with regard to the features (cache pool, decent WebUI, etc.) I'm used to after being an UnRAID user for the past 4-5 years.

2. To simplify things. While I still love to tinker with my home network, there are certain constants I'd like to just set and forget. My media streaming needs are a big one, as I not only use Plex on 2-3 clients at a time locally but also have 6-10 remote streams active each night. I'm also just now adding surveillance cameras to my home and don't want those streams disturbed. The more I can do to keep these services uninterrupted, the better.

3. I've invested a good deal of money into making my network and all my servers 10Gb in the past year. I'd like to reap some of those benefits by not having my storage lag so far behind. Will a zpool saturate my network? Most likely not. But if I can get even half of line speed (dual RAIDZ2 or RAID10 vdevs) I'd be quite happy.
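
For what it's worth, the two layouts I'm weighing would look roughly like this from the command line (pool and device names are just placeholders; on FreeNAS I'd build it through the WebUI anyway):

Code:
    # Option A: two 6-drive RAIDZ2 vdevs striped together
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11

    # Option B: striped mirrors ("RAID10"); better IOPS, less usable space
    zpool create tank \
        mirror da0 da1 mirror da2 da3 mirror da4 da5 \
        mirror da6 da7 mirror da8 da9 mirror da10 da11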
 
  • Like
Reactions: rubylaser

JDM

Member
Jun 25, 2016
44
22
8
33
3. I've invested a good deal of money into making my network and all my servers 10Gb in the past year. I'd like to reap some of those benefits by not having my storage lag so far behind. Will a zpool saturate my network? Most likely not. But if I can get even half of line speed (dual RAIDZ2 or RAID10 vdevs) I'd be quite happy.
A year and a half ago I actually did testing with ten 8TB SMR drives and ZFS for a cold storage project at work (the project showed good promise, was approved, and has now been running well in production for over a year). While I was using ZFS on Linux rather than FreeNAS, I was able to get pretty much line speed out of a single RAIDZ2 of 8TB SMR drives, but it required 6-8 concurrent transfer threads to achieve. Obviously it wasn't sustaining that performance for hours on end with no interruption, but for many minutes at a time it was able to reach those numbers. I believe this was due in part to two main factors:

1. The way ZFS uses RAM to cache writes helps soften the blow to the SMR drives during data ingest. The caveat is that the machines I was testing with had 128GB of RAM, but my tests used datasets larger than 2TB.

2. The outer tracks on the drives are PMR rather than SMR. With a 10-drive RAIDZ2 and roughly 30GB of PMR on each drive, you have close to 240GB of PMR "cache" before the drives start having to write directly into SMR territory. Even after we crossed that threshold in our testing, the falloff in performance was gradual, not a cliff.
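
On the first point, the knob that matters on ZFS on Linux is the dirty-data limit, i.e. how much incoming write data ZFS will buffer in RAM before it starts throttling new writes. A quick sketch of checking and raising it (the 8GB figure is only an example, not what we ran):

Code:
    # current cap on buffered (dirty) write data, in bytes
    cat /sys/module/zfs/parameters/zfs_dirty_data_max
    # example only: raise it to 8GB on a box with RAM to spare
    echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max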

For the workload you mentioned, the only hesitation I would have is the surveillance use case, as I could see that not working well with SMR drives. For media, backups, and general file storage, though, I like the way ZFS treats SMR drives, and the performance you can get out of them is decent. In your situation, setting up a mirror of two PMR drives (Reds, IronWolfs, something else) in a separate zpool for surveillance and keeping your current drives around for bulk storage could be a viable option, especially to keep costs down.

Since you have mirrored systems, maybe it's worth cutting one of them over to FreeNAS and testing it out. The initial data ingest won't be fun (I'm guessing you have a good chunk of data on these arrays), but that's something you can't really get around. At the very least it will show you the worst case as you progress through that huge transfer.
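
In case it helps anyone reproduce the concurrency numbers, the test harness was nothing fancier than several copies running in parallel, along these lines (paths are made up):

Code:
    # push 8 source directories onto the pool at once and let ZFS coalesce the writes
    for i in 1 2 3 4 5 6 7 8; do
        rsync -a --whole-file /staging/set$i/ /tank/ingest/set$i/ &
    done
    wait   # block until all 8 transfers finish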
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
A year and a half ago I actually did testing with ten 8TB SMR drives and ZFS for a cold storage project at work (the project showed good promise, was approved, and has now been running well in production for over a year). While I was using ZFS on Linux rather than FreeNAS, I was able to get pretty much line speed out of a single RAIDZ2 of 8TB SMR drives, but it required 6-8 concurrent transfer threads to achieve. Obviously it wasn't sustaining that performance for hours on end with no interruption, but for many minutes at a time it was able to reach those numbers. I believe this was due in part to two main factors:

1. The way ZFS uses RAM to cache writes helps soften the blow to the SMR drives during data ingest. The caveat is that the machines I was testing with had 128GB of RAM, but my tests used datasets larger than 2TB.

2. The outer tracks on the drives are PMR rather than SMR. With a 10-drive RAIDZ2 and roughly 30GB of PMR on each drive, you have close to 240GB of PMR "cache" before the drives start having to write directly into SMR territory. Even after we crossed that threshold in our testing, the falloff in performance was gradual, not a cliff.

For the workload you mentioned, the only hesitation I would have is the surveillance use case, as I could see that not working well with SMR drives. For media, backups, and general file storage, though, I like the way ZFS treats SMR drives, and the performance you can get out of them is decent. In your situation, setting up a mirror of two PMR drives (Reds, IronWolfs, something else) in a separate zpool for surveillance and keeping your current drives around for bulk storage could be a viable option, especially to keep costs down.

Since you have mirrored systems, maybe it's worth cutting one of them over to FreeNAS and testing it out. The initial data ingest won't be fun (I'm guessing you have a good chunk of data on these arrays), but that's something you can't really get around. At the very least it will show you the worst case as you progress through that huge transfer.
Thanks for the very detailed feedback; those are some interesting and promising results. When you say you achieved near line speed (with 6-8 concurrent transfers), was that 10Gb line speed? What speeds did you get from a single transfer?
 

JDM

Member
Jun 25, 2016
44
22
8
33
Thanks for the very detailed feedback; those are some interesting and promising results. When you say you achieved near line speed (with 6-8 concurrent transfers), was that 10Gb line speed? What speeds did you get from a single transfer?
Yes, that was 10Gb line speed. A single thread ran between 200-250MB/s with a mixed file workload. I also tested WD Red 6TB drives at the same time and saw similar throughput as I scaled up the threads, but they were able to sustain those speeds longer, for obvious reasons.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Yes, that was 10Gb line speed. A single thread ran between 200-250MB/s with a mixed file workload. I also tested WD Red 6TB drives at the same time and saw similar throughput as I scaled up the threads, but they were able to sustain those speeds longer, for obvious reasons.
Interesting. I may just have to set up a RAIDZ2 vdev and test this before I explore buying all new drives. Thanks.
 

trapexit

New Member
Feb 18, 2016
17
7
3
New York, NY
github.com
Regarding a point earlier in the thread about SnapRaid + MergerFS and caching.

It isn't tested much yet, but I've been working on a tool to help set up device-mapper block caches on Linux for slow drives. I've only been testing it in VMs and still need to create a systemd service (and probably an init script) that will properly call out to the tool at startup. If anyone is interested in fooling around with it, please do, and send me any feedback.

GitHub - trapexit/dmcache: a tool to help setup dm-cache for drives not controlled by LVM2
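
For anyone curious what it's doing under the hood, the raw dm-cache table the tool builds looks roughly like this (device names are placeholders, and writethrough is the safer mode while experimenting):

Code:
    # /dev/sdb = slow origin drive, /dev/nvme0n1p1 = cache metadata, /dev/nvme0n1p2 = cache data
    SIZE=$(blockdev --getsz /dev/sdb)   # origin size in 512-byte sectors
    dmsetup create sdb_cached --table \
        "0 $SIZE cache /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/sdb 512 1 writethrough default 0"
    # the cached device then shows up as /dev/mapper/sdb_cached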
 

Churchill

Admiral
Jan 6, 2016
838
213
43
I'm interested to see how the 10TB Reds do as well. Hopefully they drop soon.




A few reasons:

1. I'm losing patience with UnRAID (on both servers). I've been dealing with an issue where my user shares disappear, at which point the server is frozen and useless until a hard reset. No one on the UnRAID forums seems to have any insight into it, and none of the other non-striped RAID setups appear up to par with regard to the features (cache pool, decent WebUI, etc.) I'm used to after being an UnRAID user for the past 4-5 years.
As a long-time unRAID user, have you emailed Tom directly at Lime-Tech? I'll assume you have the Pro version, which gives you a bit of support, so you can drop him a line and say, "Look, no one has any idea how to fix this, what can I do?" Tom's pretty good about responding, or one of the other Lime-Tech employees can help out.

I like unRAID (I've bounced between SnapRAID, Xpenology, FreeNAS, NAS4Free, etc.), but there are times when I want more 'support' for certain things that I just can't figure out and the community at large can't either.
 

McKajVah

Member
Nov 14, 2011
59
10
8
Norway
Have you considered BTRFS or ZFS with the latest Proxmox 5.0 beta?

I'm running a 10-disk BTRFS RAID10 setup at the moment off an SE3016 JBOD, and it's been running great lately. I'm also currently moving all my media files over to a SnapRAID array. You can even include your BTRFS array in your SnapRAID array to get even more redundancy (I'm planning on doing this).

It's a great all-in-one solution with fantastic VM support and a Debian distro on the backend. The performance of LXC containers is just incredible.
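
To give an idea of how the SnapRAID part fits together, including the BTRFS volume is just a matter of pointing a data entry at its mount point. A trimmed snapraid.conf might look like this (paths are examples):

Code:
    # example snapraid.conf fragment
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    # plain single drives
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    # the whole BTRFS RAID10 volume treated as one big data "disk"
    data btr /mnt/btrfs-raid10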
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
So you had vibration issues with the 8TB WD Reds even though they are 5400RPM? That is disappointing to hear for sure.
Quick update on the vibration issue. I went against my own refusal to buy more 8TB Reds and bought 6 more 16TB Duos from the WD store with the 20% off Plex Pass code, so $400 for (2) 8TB Reds with the 2-year warranty. No BS shucking or praying WD won't deny an RMA.

To my surprise, all (12) 8TB Reds from this batch are silky smooth and running quiet. They are now midway through a long SMART test; once they're done, another 96TB will be reporting for duty. My 60-bay HGST chassis gobbled them right up and begged for more.

I also got a follow-up email from the WD store saying thanks for your order, blah blah blah, and here is a 20% code to use in the next 30 days, with a limit of 5 items. So I'll buy 5 more of the 16TB Duos when they come back in stock.
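
For anyone wanting to run the same check, the long test is just smartctl from the command line (swap in your own device names):

Code:
    smartctl -t long /dev/sdX   # start the extended self-test; takes many hours on an 8TB drive
    smartctl -a /dev/sdX        # progress shows up under "Self-test execution status"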


Sent from my iPhone using Tapatalk
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
1,140
594
113
New York City
www.glaver.org
To my surprise, all (12) 8TB Reds from this batch are silky smooth and running quiet. They are now midway through a long SMART test; once they're done, another 96TB will be reporting for duty. My 60-bay HGST chassis gobbled them right up and begged for more.
The WD Red spec is for chassis of 8 bays or less [reference]. Red Pro spec is for chassis of 16 bays or less [reference]. I even got some grief from WD about vibration running RE4 drives in a 16-bay chassis (they said the drives logged excessive vibration, even though the drives themselves were causing it). I'm now using HGST HUH728080AL4200 drives, which are from HGST (yes, I know they're part of WD, but they have separate manufacturing lines) and are SAS, so I don't have to deal with the "you're running consumer drives in a server chassis" BS from WD.

Back when I was testing my first few He8 drives, I had a RAIDZ1 with 5 of them and was getting 700MByte/sec single-threaded network write performance. The full chassis of 16 (5 * 5-drive RAIDZ1 + hot spare) is limited by the 10GbE speed when moving data out of the chassis.
 

CyberSkulls

Active Member
Apr 14, 2016
262
56
28
45
The WD Red spec is for chassis of 8 bays or less [reference]. Red Pro spec is for chassis of 16 bays or less [reference]. I even got some grief from WD about vibration running RE4 drives in a 16-bay chassis (they said the drives logged excessive vibration, even though the drives themselves were causing it). I'm now using HGST HUH728080AL4200 drives, which are from HGST (yes, I know they're part of WD, but they have separate manufacturing lines) and are SAS, so I don't have to deal with the "you're running consumer drives in a server chassis" BS from WD.

Back when I was testing my first few He8 drives, I had a RAIDZ1 with 5 of them and was getting 700MByte/sec single-threaded network write performance. The full chassis of 16 (5 * 5-drive RAIDZ1 + hot spare) is limited by the 10GbE speed when moving data out of the chassis.
When my chassis are actually running, everything is spun down except the drive in use (I don't stripe data), so in effect it's a single-bay chassis.

The vibration I was referring to is the individual drives themselves vibrating excessively. I'm talking about hooking up a single drive, laying it down on top of your chassis, and listening to the sweet sound of harmonics. It was really bad. This batch spins as smooth as my 2/4TB Reds. The last batch I got was just junk and a serious lapse in quality control.

This isn't directed at you, it's just in general, but I don't care what WD says about drives being designed for this or that, or for this many bays but not that many. It's such garbage, just about as bad as all these identical drives with different colored stickers being "designed" for different uses when they are the same damn drive right down to the part numbers on the PCBs. Firmware tweaks based on the rainbow color they put on the drive? Sure, I'll buy into that, but I'm not buying the rest of their propaganda. I'm all stocked up on BS; they'll have to sell it to someone else.

Edit: and it's insane that your chassis runs at speeds fast enough for your 10G network to be the bottleneck! My heart bleeds for you :)

Sent from my iPhone using Tapatalk
 

RedneckBob

New Member
Dec 5, 2016
9
1
3
120
I'm not against Seagate as far as being better or worse than WD; I'm just still a little butthurt about losing (30) of the 3TB drives of doom as well as (5) 500GB drives around the same time. Losing well over $3,000 worth of drives, even if that was a known problematic drive line, still doesn't sit well with me. So I'm hesitant to reward them with $7,000 worth of business after they completely railroaded me in the past. I can forgive, but I never forget!!
Me too, brother, me too. On occasion I open up my cabinet, stare at the big stack of failed drives, and shake my fist in the air, because I was within hours of losing an 8-drive RAID6 array. Then I yell when I remember that the Seagate replacement drives failed too.
 

chief_j

New Member
Jul 31, 2014
20
1
3
US
I have 8x 8TB SMRs on Corral in a RAIDZ2 config, and all of my media is shared with Plex using NFS over a 10Gb network. I've never seen a problem with read performance in Plex, even with 8 concurrent streams going. Writes are good as well, ~300MB/s on a sequential file copy. I think that's partly due to the amount of RAM I have in the FreeNAS box, but I also think the CoW mechanism helps avoid having to read a whole band and write it back just to update a track at the bottom.

This could be purely subjective, but it 'felt' like performance was much better once I enabled snapshots. I keep two weeks' worth of snaps, and they roll off at night, so if a band needs to be rewritten to clean up removed data, it could all be happening at night. I didn't actually test this, so I can't vouch for the gut feeling.
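
The nightly rollover itself is nothing fancier than a cron job along these lines (the dataset name and 14-day window are just examples; the periodic snapshot tasks in the GUI do the same thing):

Code:
    # nightly: take a dated snapshot of the media dataset
    zfs snapshot tank/media@auto-$(date +%Y%m%d)
    # nightly: drop the one from two weeks ago (BSD date syntax)
    zfs destroy tank/media@auto-$(date -v-14d +%Y%m%d)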
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
SMR drives don't have a problem with read performance; it's their write performance that is extremely sluggish, and resilvering will take many days. For me that isn't acceptable.
I'd be interested to see what the difference is in rebuilding an array from a failed drive with SMRs vs PMR drives.
 

maze

Active Member
Apr 27, 2013
576
100
43
I actually just did that. One of my SMR drives dropped out of my 4-disk RAID5 array, and the rebuild wasn't too bad; it ran at 110-150MB/s on average.
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
Two questions, since I haven't worked much with FreeNAS in the past.

1. What's the best way to go about copying multiple TB of data from my UnRAID server (disk or user shares?) to my new test FreeNAS server via NFS?

2. Once the data is copied, what's the easiest way to test a disk rebuild (as if a single disk failed) without having any additional disks beyond what's currently in my RAIDZ2?
 

maze

Active Member
Apr 27, 2013
576
100
43
1. Maybe scp? Least overhead.
2. Pull a disk, put it in a different PC, and wipe it (partition table, boot record, etc.).
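
On the ZFS side that boils down to something like this (pool and device names made up):

Code:
    zpool offline tank da3   # fake the failure
    # pull da3, wipe it in another box (gpart destroy -F, dd over the first few MB, etc.)
    zpool replace tank da3   # put the "new" blank disk back and start the resilver
    zpool status tank        # watch the resilver speed and ETA here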
 

ttabbal

Active Member
Mar 10, 2016
743
207
43
47
scp isn't bad, but it has crypto overhead. rsync works without that and can be restarted if it gets stopped partway through the copy.

Given the topic, I'm assuming this is a LAN based copy. If it's going over the internet you want that crypto. :)
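
For a LAN pull that keeps ssh out of the path, either copy across an NFS mount of the unRAID share or talk to an rsync daemon directly, e.g. (host, module, and paths are made up):

Code:
    # across an NFS mount
    rsync -avh --progress /mnt/unraid-media/ /mnt/tank/media/
    # or daemon mode (needs rsyncd running on the unRAID side), no ssh crypto involved
    rsync -avh --progress rsync://unraid-host/media/ /mnt/tank/media/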
 

IamSpartacus

Well-Known Member
Mar 14, 2016
2,515
650
113
It's a LAN copy, yes. I did a test with rsync last night on a 36GB Windows PC backup and it was painfully slow (10MB/s). I'll try it with some large files (media) today and see what speeds I get.
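
If the pile of small files in that backup was the bottleneck, for the media copy I'm going to try turning the delta algorithm off, something like this (my paths; I believe --info=progress2 needs rsync 3.1+):

Code:
    rsync -a --whole-file --inplace --info=progress2 /mnt/user/Media/ /mnt/freenas/media/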