Overkill (and over budget) Video Editing NAS/Server for a Startup, help!


Milko

New Member
Feb 18, 2023
Innsbruck
Hi everyone,
total n00b here, trying to build a NAS/Server for our freshly founded Video Production startup.

Our projects are still small and sparse; I was working off of a mirrored Windows Storage Pool and storing completed projects on a Drobo 5D.
The Drobo is slowly giving up the ghost, and after the last false alarm, when I lost access to active projects during a 3-day rebuild,
I decided we need a solid infrastructure backbone - a server and a NAS.

Here's what I ended up with...well, will end up with. Most of the bits haven't even arrived, yet.

I guess the following is every-noob's-NAS-building-journey, but I'll share anyway...

I looked at some off-the-shelf solutions, QNAP this, Synology that, but coming from one such system I was reluctant to jump into another.
Even if things have changed in the 10 years since I got the Drobo (and for the majority of that time, it did its job),
I found them to be expensive for what they are, to require even more expensive drives due to the small number of bays, and to be a pain when expansion is needed.

So, I thought I'd build one. I had heard about TrueNAS, read up a bit on hardware, looked at what other people had built, and made a plan:
a decent second-hand server board in a nice and quiet Define R5 with 12 HDDs and a few SSDs, a 10GbE NIC, and job done.

Being a 3D/VFX artist by trade, I was somewhat familiar with workstations, MP motherboards, the existence of ECC memory, and server/render racks (so kneadable...). I secretly wanted one, but thought it would be way out of budget. Still, an X10 board, perhaps with a built-in 10GbE NIC...why not.

oneskinnydave over at the TrueNAS forum had posted about a rig for the same purpose (though on a way higher level), and I was lucky he replied in detail to my questions.
I was still somewhat set on the Define R5, especially since I had actually got one 2nd hand with some working hardware included.
But as I kept reading and learning, being inspired by Dave's and others' rigs, I thought "I don't want a platform from 2014...who does!?".
Naturally, X12 was out of the question, so X11 was a good "compromise". I found an X11SPI-T on eBay and started reading about the details.
SAS exists, HBAs are a thing, there are other cables than SATA to connect drives! The 6134 sounds like a decent processor I had never heard of.

I bought RAM from a person on another platform, 6x32GB DDR4 2666 which would fit the X11SPI-T I didn't have, yet.
This person turned out to be NablaSquaredG, who casually asked - "which CPU are you planning to use with the memory?"
By which time the board I wanted was gone, and the next option was too expensive. I started looking for alternatives and found that the X11DPI-NT is also a pretty cool board. I also found out that I could run it with one CPU. So, I found one and bought it before it disappeared as well; it was a really good price.
By the way, did you know that MP boards, when run with one CPU, have I/O and other limitations? Oh, you did!?
Now I had to find another CPU model...and ideally get two of them. Another sleepless night gone to waste.

Then NablaSquaredG kindly noticed that my newly selected and purchased board ain't gonna fit in the case I got and suggested I look into a proper server chassis,
oh and also, the MB has a problematic revision.
Saving me from myself, yet again. Now I had to scramble to mend my mistakes before it was too late.
NablaSquaredG also pointed out that there is a person on this forum who's selling the board I liked, and with some fancy 8175 CPUs, two of them!
In my tunnel vision I quickly relieved tesla100 of his goods after a few messages.
Later on I messed up the chassis variant (apparently WIO is a thing as well), so I had to beg yet another unassuming seller to hold and exchange goods I had purchased before they got shipped out.

So now, 3 days and nights of little to no sleep later and 1000+ euros lighter, I've got this beast of a server, and all I actually needed was an external HDD with some backup :D
It'll sure come in handy, but it's total overkill for our 2-4 person crew.
Though I first need to go through the misery of setting up and testing everything, about which I have as much of a clue as I did about the hardware part.

Build’s Name: Milko's Nightmare Assuring Server
Operating System/ Storage Platform: Proxmox/TrueNAS
CPU: 2x Intel Xeon 8175M
Motherboard: Supermicro X11DPI-NT
Chassis: SuperChassis 826BE1C-R920LPB (minus the 2.5" hot-swap and the "SQ" on the PSU)
Drives: 4x 10TB WD Red, 4x 6TB WD Red, 4x 4TB WD Red, all SATA and 5400 rpm
RAM: 192 GB ( 6x 32GB DDR4-2666 RDIMM REG ECC Kingston)
Add-in Cards: HBA TBD
Power Supply: 2x 920W PWS-920P-1R
Other Bits: 1x M.2 2TB NVMe SSD (Cache perhaps), 2x 240GB Kingston A400 SSD (Boot drives)

Usage Profile: I'm planning for 2-4 people to be able to do video editing directly off the NAS (ProRes 4K, 2-12GB files on average) in DaVinci Resolve,
and to back up completed projects.
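To sanity-check the bandwidth side, here's my quick back-of-envelope math. The per-stream figure is just a ballpark for ProRes 422 HQ at UHD, and the 90% link-efficiency factor is my assumption:

```python
# Rough check: can one 10GbE link feed 4 simultaneous editors?
# ~110 MB/s per stream is a ballpark for ProRes 422 HQ at UHD;
# actual bitrate depends on frame rate and content.
stream_mb_s = 110      # assumed MB/s per editor
editors = 4
link_gbit = 10         # 10GbE

needed = stream_mb_s * editors            # sustained MB/s of reads
available = link_gbit * 1000 / 8 * 0.9    # ~90% of wire speed after overhead

print(f"needed:    {needed} MB/s")
print(f"available: {available:.0f} MB/s")
```

So even 4 concurrent streams sit well under half the link; the spinning drives are more likely to be the bottleneck than the NIC.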

Other information: Now that we have all this compute resource, it would be nice to be able to spin up some VMs with Proxmox.
Actually, perhaps even virtualize TrueNAS. Get Pi-hole and a few other bits set up...remote access, perhaps.

What I need your help with:
- HBA: The chassis comes with a BPN-SAS826A, and the MB, while without an LSI HBA, already has support for 12+2 drives.
Considering how many drives I already have, 12x HDD plus a few SSDs, what would be the best way to connect things up?
If I understood correctly, I could connect the MB directly to the Backplane via SFF to SFF cable?
Actually, what's the correct cable to connect the HBA to the A-version Backplane?
This way I'd get the board/chassis fully populated, which probably isn't a good idea.
Perhaps 2x 8-drive HBAs is a better solution, then keep the MB slots for the SSD boot drives?
Are those $50 LSI 9207-8i 6Gb/s SAS 2308 cards on eBay any good? Does the LSI 9200-8e sound more reliable?

- Backplane: The BPN-SAS826A is a pass-through board; does that mean it supports 2TB+ drives?

- Expansion: There won't be space for more drives unless I swap with larger drives, is that a good idea?
There are plenty of PCIe slots; even with 2x HBAs in the x8 slots, could I run an expander to another JBOD case?
Is there a world where I don't need a MB/backplane in the JBOD chassis?
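For context, here's my rough math on usable space with the current drives, assuming three 4-wide RAIDZ1 vdevs (one per size class). That's just a layout I'm toying with, not a recommendation, and it ignores ZFS overhead and TB-vs-TiB:

```python
# Back-of-envelope usable capacity for 4x10TB + 4x6TB + 4x4TB
# arranged as three 4-wide RAIDZ1 vdevs (assumed layout).
vdevs = {"10TB": [10] * 4, "6TB": [6] * 4, "4TB": [4] * 4}
parity = 1  # RAIDZ1 loses one drive's worth of space per vdev

total = 0
for name, disks in vdevs.items():
    usable = min(disks) * (len(disks) - parity)  # smallest disk sets vdev size
    total += usable
    print(f"{name} vdev: ~{usable} TB usable")

print(f"pool total: ~{total} TB usable before ZFS overhead")
```

So roughly 60 TB usable as-is, and swapping a vdev's drives for bigger ones scales that vdev by the smallest-disk rule above.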

- SATA DOM: The two SATA connectors on the MB are SATA DOM enabled, and have power pins.
Could I still run SSDs on them? The boot drives, that would be ideal. Is that a good idea?

- Cache: I got one M.2 drive (need another one), mostly because it sounded perfect for video/VFX cache.
Is it a good idea to use it for SLOG / L2ARC (I'm still learning what exactly those do; haven't got there just yet)?

- Cooling: This tiny 2U box has no space and no options.
The SNK-P0068APS4 seems to be the only option. Unless it isn't?
With active coolers - shroud or no shroud?

- NIC: I'd like to have 10Gb direct connection to my WS for now and a 10Gb connection to a future switch.
The mothership has dual 10GbE, so that should be achievable?
We probably won't be saturating 2.5GbE with those rusty drives, but perpetual upgrading is something I'd like to avoid as much as possible.
It would be great to have at least 2.5GbE cabled for laptop users, but also WiFi 5/6.
I suppose I need a router, connected to the server in some way, with the antenna out in the office.
Then pass the connection through from Proxmox to the NAS and have some users and groups set up.
This already sounds scary unsafe and unsanitary...what would be a better way?
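For what it's worth, my current (untested!) understanding of the Proxmox side is just a Linux bridge on one of the onboard 10GbE ports, with the TrueNAS VM's virtual NIC attached to it. Interface names and addresses below are made up:

```
# /etc/network/interfaces on the Proxmox host - a sketch, not tested
auto vmbr0
iface vmbr0 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
    bridge-ports eno1    # guessing one of the two onboard 10GBase-T ports
    bridge-stp off
    bridge-fd 0
```

The TrueNAS VM would then get a virtio NIC on vmbr0 and show up on the LAN like any other box. Corrections welcome.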

Well, as you can see, I've got everything figured out, it's smooth sailing from now on :D
For real now, I would greatly appreciate if you could help me find answers to those questions!
I got myself pretty deep into this, somehow expecting it to be a bit more straightforward while the Drobo clock is ticking.

Huge thanks to @NablaSquaredG, @tesla100 and oneskinnydave, who despite my best efforts, have put me back on the right track!
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
Considering how many drives I already have, 12x HDD plus a few SSDs, what would be the best way to connect things up?
If I understood correctly, I could connect the MB directly to the Backplane via SFF to SFF cable?
Yes, absolutely. The X11DPI-NT has 3x SFF-8087 ports, each providing 4x SATA. The backplane also has 3x SFF-8087 ports and is a direct-attach backplane, so there's no SAS expander between the ports and the drives (if it were an EL backplane with an expander, this wouldn't work!).

You can just grab some standard SFF-8087 to SFF-8087 cables (you should be able to find them for a couple of euros on eBay or Amazon; maybe the chassis will already have some included).

Actually, what's the correct cable to connect the HBA to the A-version Backplane?
That depends on your HBA. If you get a SAS-2 HBA (which I wouldn't recommend), you most likely need SFF-8087 to SFF-8087 cables.
If you get a SAS-3 HBA, you most likely need SFF-8643 (HBA) to SFF-8087 (backplane) cables. There are some Dell OEM SAS-3 controllers which use SFF-8087 for SAS-3, but this is a rare exception. SAS-3 is usually SFF-8643, and SAS-2 is SFF-8087.


This way I'd get the board/chassis fully populated, which probably isn't a good idea.
Why not?

Perhaps 2x 8-drive HBAs is a better solution, then keep the MB slots for the SSD boot drives?
If you only have SATA drives, you can save yourself the money and just connect all 12 (3x4) SATA ports to the backplane and use the remaining 2 SATADOM ports (orange) for the system drives - SATADOM is backwards compatible with normal SATA.

Are those $50 LSI 9207-8i 6Gb/s SAS 2308 cards on eBay any good?
I have personally stopped buying SAS-2 controllers; I recommend the Fujitsu CP400i or Supermicro S3008L-L8e SAS-3 controllers. I've got 3 S3008L-L8e from China incoming; I'll let you all know whether they're good or not.

Does the LSI 9200-8e sound more reliable?
8e means that the ports face outwards, so you can connect external disk shelves. Probably not what you need.

- Backplane: The BPN-SAS826A is a pass-through board; does that mean it supports 2TB+ drives?
Yes, it does. The 2TB limitation was only on old SAS-1 controllers (3.0Gbit/s) with SATA drives.

- SATA DOM: The two SATA connectors on the MB are SATA DOM enabled, and have power pins.
Could I still run SSDs on them? The boot drives, that would be ideal. Is that a good idea?
Sure, like I recommended, you can just run the system SSDs on those ports. Just keep in mind that you somehow need to power your SSDs (unless you buy SATADOM modules, which would get their power from the ports).

- Cache: I got one M.2 drive (need another one), mostly because it sounded perfect for video/VFX cache.
Is it a good idea to use it for SLOG / L2ARC (I'm still learning what exactly those do; haven't got there just yet)?
Depends - many different opinions exist. I don't think you'll need L2ARC right now, as you've probably got enough RAM for your use case. You shouldn't use a consumer SSD for SLOG, as it won't give you great performance. An enterprise SSD with Power Loss Protection, or Optane, is the way to go.
SLOG could help with performance, as you've got an HDD pool.

- Cooling: This tiny 2U box has no space and no options.
The SNK-P0068APS4 seems to be the only option. Unless it isn't?
With active coolers - shroud or no shroud?
Yeah, the SNK-P0068APS4 is good. I personally have started using shrouds (independent of whether I'm using active or passive heatsinks) to make sure that beefy network cards like the Mellanox ConnectX-4 2x100G get enough airflow and don't overheat.
If you have a shroud, you could also go passive with the SNK-P0068PS - whether active or passive is better is something I can't answer for sure. It's been on my list to compare that for a long time.

- NIC: I'd like to have 10Gb direct connection to my WS for now and a 10Gb connection to a future switch.
The mothership has dual 10GbE, so that should be achievable?
Yes, it has 10GBase-T copper. 10GBase-T is a bit of a cumbersome standard, as switches are quite expensive, etc...
If you find one that fits your needs and has 10GBase-T, that's great.

If you want fiber 10G, you can buy a Mellanox ConnectX-3 - they're quite cheap (max 50€), very reliable, and have 2x 10G fiber ports.
 

Milko

This is amazing, thank you for the in-depth explanations!

- I'll go with 3x SFF-8087 cables for now and no HBA, to save a bit of cash.
- Boot SSDs will go to SATADOM as advised, and I already have a Molex-to-SATA adapter for power.
- Also, no L2ARC for now either, until we need it and until there's budget for SSDs with PLP.
- I got the active coolers, just in case, in the Narrow variant.
- Network: I will direct-connect my WS to the NAS for now using the built-in NICs,
but will switch things over to Mellanox 10G fiber once we have a switch.

So, that's a lot of boxes ticked!


I don't have a very reasonable answer; it's just the inflexibility - I won't easily be able to plug in another drive if I need to.


What would be a good switch? The TL-SG3210XHP-M2 sounds good on paper: 8-port 2.5GBASE-T and 2-port 10GE SFP+.
Is the convenience of running both BASE-T over RJ45 and SFP a downside? Should I go full SFP like the TL-SX3008F + transceivers?
It is likely that some people will bring their workstation and occasionally we'll have freelancers, and RJ45 is ubiquitous.
Are there any Go-To devices/brands in this area?

So much more to learn...
 