Video Editing NAS for 5 editors - FreeNAS/TrueNAS/Unraid


MaxWo

New Member
Feb 8, 2021
Hey guys! :D
First of all, thank you very much for all the good information and help here.
I built my first Unraid server 4 years ago. At the time it was meant to be used as a video editing NAS, Plex server and for other applications. That has since changed: shortly afterwards I founded a small video production company and we had 3 editors using it as a NAS only. (RIP my Plex library.)

Upgrading to 10GbE NICs + a switch helped a lot and everything worked surprisingly well. But with higher bitrate and resolution footage + adding a 4th editor, I think we hit the limits of Unraid, mainly because of the missing read cache. With high bitrate footage we now get poor performance while editing. All editors use Adobe Premiere Pro.


So I looked back into FreeNAS/TrueNAS and OpenZFS, read a lot of forum posts, watched videos and tried to do my research. STH, L1T, LTT, 45Drives ... I found great information everywhere, but I am now a bit overwhelmed and would like to ask for your opinion. What has your experience with ZFS been? What is important for video editing?

Briefly about me: I feel more at home in a GUI. I have already built many PCs and am excited about enterprise hardware, but I don't have much practical experience with it. The new server should co-exist with the currently used Unraid server.

I came up with the following system:

Mainboard: Supermicro X11SPI-TF
CPU: Intel Xeon Silver 4208
RAM: 96 GB (6x16GB) DDR4 2400 ECC REG
FAN: Noctua NH-U14S
CSE: Inter-Tech IPC 4U-4410
PSU: 800W (have one lying around)

2000GB Patriot Viper VPN100 M.2 2280 PCIe 3.0 x4 NVMe 1.3 3D-NAND TLC

5 x 10TB Toshiba Enterprise Capacity MG06ACA10TE 256MB SATA 6Gb/s

My plan was to use the NVMe as L2ARC and create a single RAIDZ1 vdev, giving me around 38TB of usable storage, with the option to add a second vdev with another 5 drives in the future.
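Just as a sketch of what I had in mind in zpool terms (pool and device names like tank, da0 or nvd0 are placeholders and will differ on the real system):

Code:
# single 5-disk RAIDZ1 vdev
zpool create tank raidz1 da0 da1 da2 da3 da4
# NVMe as L2ARC read cache
zpool add tank cache nvd0
# a second 5-disk RAIDZ1 vdev could later be added with
# zpool add tank raidz1 da5 da6 da7 da8 da9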

This configuration costs approximately $3,850 (€3,200 in Germany).

Am I missing something? Would you recommend something different from your experience?
Is it worth using 32GB RAM sticks and doubling the RAM to 192GB?

Thank you for all your input. I would be happy to share the new server, the build process, the performance tests and my experience with you :D
 

Spartacus

Well-Known Member
May 27, 2019
Austin, TX
So one of the biggest differences is how FreeNAS uses cache. Unraid has 2 tiers: SSD for the fast tier and HDD for the slow tier.
FreeNAS has 3 tiers: it uses RAM as the first tier of cache, SSD as the second and HDD as the third.
So if you get enough RAM you don't really need an SSD cache at all, depending on your daily usage.

For the sake of data safety, whether or not you have a backup of your data, I highly recommend at minimum getting a 6th drive and running RAIDZ2. If the data is business critical a Z3 might even be warranted (though if you have 6 drives you may as well mirror them at that point).
After an initial drive failure, a second drive has a much higher likelihood of failing during the heavy load of the rebuild.


That all said, let's step back: if your editors have been fine with Unraid so far, it's likely not Unraid itself but how it works.
Since you can only get the speed of one drive at a time, that's generally your bottleneck (assuming you don't have an Unraid cache), so it's not so much adding editors as the increased resolution and bitrate that is maxing out individual hard drives.
Which means that if you have 3-4 people maxing out those individual drives, a ZFS RAID will only get you so much additional performance, since they will be using the array collectively instead of their own individual drives.
So unless your editors run solely off the RAM/SSD cache, they're likely to run into similar limitations quickly, as there will still be 4 users hitting the shared storage I/O at the same time.

With that in mind, an additional 1-2 drives for improved performance might be warranted, and perhaps go with 8TB drives instead to lower the cost somewhat while still meeting your space needs. (I would recommend a 4+2 or 8+2 layout for RAIDZ2, as that will optimize the performance of the array.)
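As a rough sketch of the 4+2 layout (device names below are placeholders, not your actual disks):

Code:
# 6-disk RAIDZ2: any two drives can fail without data loss
zpool create tank raidz2 da0 da1 da2 da3 da4 da5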
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
To add to Spartacus' points, I'd rather suggest going with smaller HDDs, like 6TB, but instead of 5-6 drives go with 12 drives: 2x RAIDZ2 of 6 drives each.
6 drives per RAIDZ2 vdev is what iXsystems recommends. I have big doubts that any caching could be of much significance for video editing.
What you need is high-speed sequential reads and writes. We recently built two systems in order to saturate 10gig sequential writes; we used 36 drives in 6x RAIDZ2 of 6 drives.
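Roughly like this, just as a sketch with placeholder device names:

Code:
# two 6-disk RAIDZ2 vdevs striped into one pool
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11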
 

gea

Well-Known Member
Dec 31, 2010
DE
You mainly need to understand that the read cache on ZFS is primarily RAM (ARC), which can be extended with a slower SSD/NVMe (L2ARC). As the ZFS read cache does not cache sequential data, it is mainly there to improve access to small random IO on a read-last/read-most basis. In your case it mainly improves metadata access. With 96GB RAM you will see no improvement on video data, no matter whether you double the RAM or add an NVMe as L2ARC. I would skip the NVMe and start with a 6-disk (3 x 2-disk) mirror pool that you can extend with more mirrors.
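If you want to verify this on your own workload: on FreeNAS/TrueNAS for example you can look at the ARC/L2ARC hit rates and watch the pool while the editors are working (tool names may differ on other platforms, pool name is a placeholder):

Code:
# ARC / L2ARC hit rate statistics
arc_summary
# per-vdev IO on the pool, refreshed every 5 seconds
zpool iostat -v tank 5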

What you need is a fast pool with good IOPS values. Many smaller but fast disks in a multi-mirror (raid-10 style) layout is what I would prefer to offer 10G performance to several users. The multithreaded SMB service on Solarish may also offer better performance than SAMBA. The fastest SMB in my tests was Oracle Solaris with native ZFS, but OmniOS (a Solaris fork, a European effort) is nearly as fast and also offers the multithreaded SMB server and OpenZFS: OmniOS Community Edition
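Such a multi-mirror pool as a sketch (placeholder device names), extendable mirror by mirror:

Code:
# three 2-disk mirrors striped (raid-10 style)
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# extend later with another mirror
# zpool add tank mirror da6 da7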

I have done performance tests; maybe you can see the principles there. An AMD Epyc system may offer an additional performance boost over a Xeon in the same price range.

See napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual,
especially the PDFs about principles, AMD Epyc vs Xeon, L2ARC tests with several pool layouts, and SMB tests.

Regards

Gea
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
@gea, thank you for the detailed explanation, which mirrors my thoughts on using cache with large video files.
To make it even snappier, I'd suggest going with 5x 2-disk mirrors.
 

MaxWo

New Member
Feb 8, 2021
Thank you, guys. Your answers really helped me after reading so much about ZFS.
You are right, RAIDZ2 is the minimum for data safety. As an external backup I am already uploading to the cloud.

@gea OmniOS looks fantastic. I'm currently reading through all of the PDFs; they are worth their weight in gold! Thank you.

I think my next step will be to buy the hard drives; I'll need them either way. Then I'll take the time to build a test system from spare parts that I have, and then just start testing with your recommended pool configurations.

Thanks for your time. As soon as I have tested and have the first results, I'll post them here. Maybe it will help someone with similar needs ;)