Need help: Budget UnRaid server with Core or Ryzen, NVMe...

Techspin

Coffee enthusiast
I made a power-hungry Media Editing NAS build. I've dipped my toes into server-land and picked up an old EVGA Classified SR-2 with dual X5650s. After setting up UnRaid with 4TB parity and an 11-drive array totaling 32.5TB, I measured power draw: 401W at power-on, 280W while booting, and 255W with the array running for a minute. I know this is partly due to the ancient Silverstone SST-ST1500, which still pulls 4.5W on standby(!), but we're in Taiwan, and with summers here power bills get expensive really quickly, so I'll need to upgrade ASAP before summer; new hardware will likely pay for itself over 4-5 months. I've searched articles and forums on here for months actually doing research, but what's the cheapest, easy-to-find motherboard using Intel Core/AMD Ryzen with NVMe that you all would recommend? EDIT: This server needs to be really quiet; a loud build isn't an option.
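A quick back-of-envelope on what that 255W draw costs per month (the NT$4/kWh rate and the 60W figure for a hypothetical modern replacement build are assumptions to plug your own numbers into, not measurements):

```python
# Back-of-envelope power cost for a 24/7 server; all inputs are
# assumptions to adjust (rate, hours, replacement-build wattage).
def monthly_cost_twd(watts, rate_twd_per_kwh=4.0, hours_per_day=24, days=30):
    """Return (kWh per month, cost in NT$) for a constant draw."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh, kwh * rate_twd_per_kwh

old_kwh, old_cost = monthly_cost_twd(255)  # measured draw, array running
new_kwh, new_cost = monthly_cost_twd(60)   # assumed modern low-power build
print(f"old: {old_kwh:.0f} kWh ~ NT${old_cost:.0f}/month")
print(f"new: {new_kwh:.0f} kWh ~ NT${new_cost:.0f}/month")
print(f"saving ~ NT${old_cost - new_cost:.0f}/month")
```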

Currently we're using an LSI SAS HBA (Fujitsu D2607-A21) pre-flashed into IT mode, and will eventually upgrade our 1GbE network with either an Asus 10GbE RJ45 card or a dual-port 40-gigabit Mellanox adapter, if I can ever figure out optical wiring. In my research I saw Steve from GN use an ASRock Rack X470D4U2-2T with a Ryzen 5 3600, and read STH's piece on the $400 ASRock Rack X570D4U-2L2T, which has 10GbE RJ45 but no mini-SAS? And Linus used an ASUS ROG Strix B450-F Gaming + 3600 setup, but that board is getting hard to find. Any cheap ~$500 or so combo, either Intel or AMD, that's available now that you'd recommend? So far I found these; are any of them on anyone's top list?
Asus Prime Z490-A / Gigabyte Z490 Vision G - both 2.5GbE, Gigabyte has dual Type-C
Asus ROG Strix Z590-A - 2.5GbE, dual Type-C
Gigabyte Z590 Vision D - 2.5GbE w/ WiFi 6, dual Thunderbolt 4... or the Vision G / MSI Z590 Gaming Carbon WiFi?

One consideration: since this is a Media Editing NAS, we need to get 4K footage onto the server, whether by network, Type-C, or Thunderbolt; not sure what will work best yet. So there's probably an option I should be considering; I just feel like I'm missing something. Thanks in advance!
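To put the ingest question in perspective, here's a rough transfer-time comparison; the throughput figures are ballpark real-world numbers assumed for illustration, not benchmarks:

```python
# Approximate real-world throughputs in MB/s (assumed, not measured).
links = {"1GbE": 110, "2.5GbE": 280, "10GbE": 1000, "USB-C 10Gbps": 900}

def transfer_minutes(size_gb, mb_per_s):
    """Minutes to move size_gb gigabytes at a sustained mb_per_s rate."""
    return size_gb * 1000 / mb_per_s / 60

# Example: a 500GB batch of 4K footage over each link.
for name, speed in links.items():
    print(f"{name:>12}: {transfer_minutes(500, speed):6.1f} min for 500GB")
```

The takeaway is just that gigabit ethernet is the bottleneck by an order of magnitude; any of the faster options changes the workflow.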
 
Last edited:

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,372
489
83
For anything on this board remotely server-related, you're going to have a bunch of people like me say "you need something with ECC and IPMI!". Many AMD "desktop" motherboards support unbuffered ECC, but it's almost unheard of for Intel - you'd need to go to their server ranges for that. My desire for ECC and a relatively high core count was the prime driver for me wanting an AMD system.

I'm very happy with my X470D4U, 3700X and ECC RAM myself*... but "media editing" is quite a broad concept. Some people do all their transcoding on the CPU; others use the fixed-function encoders on a graphics card for the same thing. I'm using an SFP+ 10Gb network myself with an add-in X710 controller and a venerable M1015 in IT mode for IO. The X570 iteration of the board is basically the same deal but with PCIe 4 (and, I think, somewhat higher idle power usage as a result of the X570 chipset, though I don't have concrete proof of this).

The economics of small-scale servers have likely changed a fair amount since I did my pre-pandemic build (Intel prices on low-end Xeons with >4 cores have dropped, I believe), but if power is a big consideration it depends very largely on what you're using the server for; if it's just serving files, you don't need a whole hill of beans for processing power (I wanted a reasonably powerful server myself as I do a fair amount of CPU transcoding on it). But you say "media editing", so I'm not sure if you're actually doing processing on the server or just using it for files.

Incidentally, the X56xx platform tended to be very power-hungry, especially in >1-socket configurations. Whatever you're doing with the box, anything modern will likely be a considerable improvement.

Edit: I should add that, whilst I'm using dual M.2 drives for the OS, the X470D4U doesn't actually run them at PCIe 3.0 x4 (one is PCIe 3.0 x2, the other PCIe 2.0 x4, due to CPU and chipset limitations on the Ryzen platform); I believe the X570 model allows PCIe 4.0 x4 on both M.2 slots if you need the bandwidth (but I doubt you do if you've been using gigabit ethernet until now).
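For rough context on why that M.2 limitation rarely matters over gigabit, here's a quick sanity check; the per-lane figures are approximate usable throughput after encoding overhead, rounded for illustration:

```python
# Approximate usable bandwidth per PCIe lane in GB/s (rounded figures
# after 8b/10b or 128b/130b encoding overhead).
PCIE_GBPS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def link_gbps(gen, lanes):
    """Approximate usable GB/s for a PCIe link of a given gen and width."""
    return PCIE_GBPS[gen] * lanes

gige = 0.125  # gigabit ethernet, GB/s
for gen, lanes in [("3.0", 2), ("2.0", 4), ("4.0", 4)]:
    bw = link_gbps(gen, lanes)
    print(f"PCIe {gen} x{lanes}: ~{bw:.2f} GB/s ({bw / gige:.0f}x gigabit)")
```

Even the "limited" PCIe 3.0 x2 slot has roughly 15x the bandwidth of a gigabit link.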

* Details on my build are in this thread if you're interested, including some CPU and power consumption benchmarks.
 
  • Like
Reactions: T_Minus

kapone

Well-Known Member
May 23, 2015
1,006
572
113
I'm confused...

You say "unraid server" in one breath and "media EDITING NAS" in another...

Is this a server or a workstation?

P.S. The EVGA Classified SR-2... is not even close to "server land". You picked one of the most power-hungry (consumer-level) boards out there, on a platform that is also one of the most power-hungry. Looking at pictures of your build, you have ONE teeny tiny HBA among that sea of PCIe slots. If that storage card is all you wanted to add, you could have used an ITX board with a PCIe slot, and it'd consume 1/10th of the power of the platform you chose.

Edit: To give you an idea of power consumption and platforms: my DIY SAN uses Supermicro X9SRL-F boards with 64GB RAM and an E5-1620 v2 CPU in Chenbro 48-bay server cases (I have two identical systems). The Supermicro board/CPU/64GB RAM/one SSD/one Adaptec 16-port RAID card/one Mellanox CX3 dual 40GbE network card/6x 120mm fans (at ~25% PWM)... all use less than 60W when idle.
 
  • Like
Reactions: T_Minus and itronin

Techspin

Coffee enthusiast
@EffrafaxOfWug Great info here; I'll grant that a real server should have IPMI and ECC RAM. I like the idea of AMD allowing ECC. I know it's recommended for ZFS, but non-ECC may be just fine for UnRaid's default XFS? Obviously ECC is better, but if we don't have access...

Yeah, it will just be a file server, no transcoding in the near future. I have no Linux ability, hah. Thanks for the X56xx power info too; yeah, I knew about the PCIe 2.0 limitation on the X470... we work with MSI sometimes and do some minimal overclocking guides. Looking at the X470D4U, B550D4M, X570D4U... that X570D4U-2L2T is amazing. Expensive too... trying to use an 'older' or sub-$150 mobo + 3600 if possible.

@kapone Thanks, this confirmed my suspicions. The build is supposed to be a Media Editing NAS for use by a few Premiere editors working on their own stations (eventually); the UnRaid box would serve data to them. Key for us will be an NVMe cache, plus next-year upgradeability using perhaps a second Fujitsu D2607-A21 and something like your Mellanox 40GbE. I learned what a SAN is now, cheers.

Follow-up question: for a cheap offline weekly NAS backup solution, would you use multiple external HDDs and push updates as needed? It's ghetto, but we use Allway Sync, which has been super reliable over the years, though it's a Windoze program. I'm just learning UnRaid; I've heard you can use rsync to set something up, but if it's Linux I'm screwed unless there's a super clear walkthrough - this is more of a "what hardware to use" question than a software question.
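For what it's worth, the rsync side is typically a one-liner (`rsync -a --delete SRC DST` from a scheduled job). The Python sketch below mimics just the copy half of that, with illustrative paths that are not from the original post, to show how little is involved:

```python
# A minimal one-way "push" backup in Python, roughly the copy half of
# `rsync -a SRC DST`. Unlike rsync --delete, it does NOT remove files
# that have vanished from the source. Paths here are illustrative only.
import shutil

def mirror(src: str, dst: str) -> None:
    """Copy the src tree into dst, overwriting existing files."""
    shutil.copytree(src, dst, dirs_exist_ok=True)

# Weekly usage, e.g. from a cron job or UnRaid's User Scripts plugin:
# mirror("/mnt/user/media", "/mnt/disks/usb_backup/media")
```

In practice the "what hardware" answer pairs this with a single large external drive per weekly rotation, which is exactly the cheap setup described above.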

Thanks, very helpful community here! -Rick
 

zer0sum

Active Member
Mar 8, 2013
455
164
43
Are you sure UnRAID is the right tool for the job?
It's an amazing NAS with incredible flexibility but it's not a speed demon if that's what you really need.

Check out SpaceInvader One's videos for tons of really detailed and clear walkthroughs - https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA

I just rebuilt my UnRAID server and was trying to keep to a low budget and this is what I ended up with :)

Super Micro X11SSM-F - $114 (Amazon open box)
Xeon E3-1270v5 , 4 core / 8 thread, 3.6-4Ghz - $125 (Ebay)
Mellanox ConnectX-3 - $20 (Ebay)

That gives you a relatively speedy little box that has IPMI, can take 64GB of ECC memory, and has a dual-port NIC that can do 10/40/56GbE :D

I filled the other 3 PCIe slots with an M1015 SAS card ($70), an Nvidia P400 ($80) for hardware transcoding, and an AOC-SLG3-2M2 ($45) for 2x NVMe drives

Total cost is ~$454 without memory or nvme drives
 
Last edited:

Techspin

Coffee enthusiast
I just rebuilt my UnRAID server and was trying to keep to a low budget and this is what I ended up with :)

Super Micro X11SSM-F - $114 (Amazon open box)
Xeon E3-1270v5 - $125 (Ebay)
Mellanox ConnectX-3 - $20 (Ebay)

That gives you a relatively speedy little box that has IPMI, can take 64GB of ECC memory, and has a dual-port NIC that can do 10/40/56GbE :D

I filled the other 3 PCIe slots with an M1015 SAS card ($70), an Nvidia P400 ($80) for hardware transcoding, and an AOC-SLG3-2M2 ($45) for 2x NVMe drives

Total cost is ~$454 without memory or nvme drives
I'm zero with Linux, so unless there's another option I just need a NAS with a GUI... UnRaid seems to fit the bill?

Great finds, and cheap too. Since I'm in Taiwan and power bills are high here, may I ask roughly how many watts it draws at idle? It suddenly became a big concern for me since summer is coming. That Mellanox ConnectX-3 looks great too. I'll be back here when I try my hand at 40GbE, haha
 
Last edited:

kapone

Well-Known Member
May 23, 2015
1,006
572
113
I'll say this and then shut up. :)

For a "media editing NAS" i.e. a network storage that will be used for media editing by (potentially) multiple people, UnRaid is the absolutely wrong choice.

Now I'll keep my mouth shut.
 
  • Wow
Reactions: Techspin

Techspin

Coffee enthusiast
@kapone Open mind here! I'm not set on an UnRaid build; I just came to that conclusion through research, that's all. We'd want the drives to spin down to save power, which will be a big factor in summer, and we'd likely have to shut down the server nightly too :p Please, if you have a better option I'd love to hear it, although as mentioned my Linux skills are abysmal. I did see Steve at GN and Wendell from L1Techs set up UnRaid for their builds, and since I have minimal time to tinker... anything similarly easy? Always appreciate the input and learning!
 

kapone

Well-Known Member
May 23, 2015
1,006
572
113
My philosophy (not that it's unusual...) is KISS (I'm sure you know what that means) :)

What's wrong with Windows, a simple hardware RAID card, and a RAID array?

It gives you the speed/throughput, a GUI, disk spin-down, parity, etc.
 

josh

Active Member
Oct 21, 2013
514
136
43
For anything on this board remotely server-related, you're going to have a bunch of people like me say "you need something with ECC and IPMI!". Many AMD "desktop" motherboards support unbuffered ECC, but it's almost unheard of for Intel - you'd need to go to their server ranges for that. My desire for ECC and a relatively high core count was the prime driver for me wanting an AMD system.

I'm very happy with my X470D4U, 3700X and ECC RAM myself*... but "media editing" is quite a broad concept. Some people do all their transcoding on the CPU; others use the fixed-function encoders on a graphics card for the same thing. I'm using an SFP+ 10Gb network myself with an add-in X710 controller and a venerable M1015 in IT mode for IO. The X570 iteration of the board is basically the same deal but with PCIe 4 (and, I think, somewhat higher idle power usage as a result of the X570 chipset, though I don't have concrete proof of this).

The economics of small-scale servers have likely changed a fair amount since I did my pre-pandemic build (Intel prices on low-end Xeons with >4 cores have dropped, I believe), but if power is a big consideration it depends very largely on what you're using the server for; if it's just serving files, you don't need a whole hill of beans for processing power (I wanted a reasonably powerful server myself as I do a fair amount of CPU transcoding on it). But you say "media editing", so I'm not sure if you're actually doing processing on the server or just using it for files.

Incidentally, the X56xx platform tended to be very power-hungry, especially in >1-socket configurations. Whatever you're doing with the box, anything modern will likely be a considerable improvement.

Edit: I should add that, whilst I'm using dual M.2 drives for the OS, the X470D4U doesn't actually run them at PCIe 3.0 x4 (one is PCIe 3.0 x2, the other PCIe 2.0 x4, due to CPU and chipset limitations on the Ryzen platform); I believe the X570 model allows PCIe 4.0 x4 on both M.2 slots if you need the bandwidth (but I doubt you do if you've been using gigabit ethernet until now).

* Details on my build are in this thread if you're interested, including some CPU and power consumption benchmarks.
I've been tempted to go the AMD route for some time, but the lack of PCIe slots on the motherboards just frustrates me. I need to stick 2x GPUs on the board (each only needs 4.0 x8) and preferably an additional x4 slot, but the "server" boards are all mATX for some dumb reason.
 
  • Like
Reactions: Techspin

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,372
489
83
I've been tempted to go the AMD route for some time, but the lack of PCIe slots on the motherboards just frustrates me. I need to stick 2x GPUs on the board (each only needs 4.0 x8) and preferably an additional x4 slot, but the "server" boards are all mATX for some dumb reason.
ASRR do a couple of ATX B550 boards but sadly only with two PCIe slots, so two x8 cards would be feasible but the extra x4 wouldn't. Similarly, the mATX X570 boards would also allow for two x8 cards if it was going in an ATX case (assuming dual-slot GPUs) but again you'd be maxed out on expansion.

With its roots as a consumer platform, one of ryzen's shortcomings was the lack of PCIe lanes (they cost power and extra motherboard complexity). I'd hoped the embedded Epyc 3000 series would have got a little more traction (as they've got a lot more lanes available and support registered ECC as well) but sadly they've received little attention compared to the socketed chips.

The ryzen kit should be thought of as "server-lite" rather than a proper server though; the Epyc 7000's are where that's at and they've got more IO than you can shake a stick at. They're certainly far pricier than the ryzen's though.
 
  • Like
Reactions: Techspin

josh

Active Member
Oct 21, 2013
514
136
43
ASRR do a couple of ATX B550 boards but sadly only with two PCIe slots, so two x8 cards would be feasible but the extra x4 wouldn't. Similarly, the mATX X570 boards would also allow for two x8 cards if it was going in an ATX case (assuming dual-slot GPUs) but again you'd be maxed out on expansion.

With its roots as a consumer platform, one of ryzen's shortcomings was the lack of PCIe lanes (they cost power and extra motherboard complexity). I'd hoped the embedded Epyc 3000 series would have got a little more traction (as they've got a lot more lanes available and support registered ECC as well) but sadly they've received little attention compared to the socketed chips.

The ryzen kit should be thought of as "server-lite" rather than a proper server though; the Epyc 7000's are where that's at and they've got more IO than you can shake a stick at. They're certainly far pricier than the ryzen's though.
Yeah, I've looked at all the "server" boards and they're only made by ASRR for some reason. Ryzen 5000 should have 20x PCIe 4.0 lanes after the interconnect, so 8 + 8 + 4 shouldn't be a huge ask. Looks like I'll have to stick with my dual-2011 setup for a few more years.

The X570 Pro WS has everything I need but lacks IPMI. Hopefully something similar pops up soon.
 
Last edited:
  • Like
Reactions: Techspin

josh

Active Member
Oct 21, 2013
514
136
43
Yeah, I noticed the lack of PCIe on all the Ryzen boards vs Intel... so is anyone looking at a Z490/Z590 then for an upgrade path? Or something like a B560 Aorus Pro AX that has 3? I think 3 slots is the minimum for my needs plus one slot for future expansion.
No ECC on Intels. Out of the picture completely.
 
  • Sad
Reactions: Techspin

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,372
489
83
Yea I've looked at all the "server" boards and they're only by ASRR for some reason. Ryzen 5000 should have 20x PCIe 4.0 lanes after interconnect so 8 + 8 + 4 shouldn't be a huge ask.
ASRock have frequently had quirky design choices; ryzen as a server-lite platform is something only they've run with as far as I can tell.

I believe the 20 PCIe 4.0 lanes from the CPU are always split into an x16 for the GPU and an x4 for NVMe, with other peripherals hanging off the chipset.
 
  • Like
Reactions: Techspin

josh

Active Member
Oct 21, 2013
514
136
43
ASRock have frequently had quirky design choices; ryzen as a server-lite platform is something only they've run with as far as I can tell.

I believe the 20 PCIe 4.0 lanes from the CPU are always split into an x16 for the GPU and an x4 for NVMe, with other peripherals hanging off the chipset.
Yeah, the thing is the GPU doesn't really need to run at 4.0 x16, so it doesn't make any sense to opt for that instead of x8/x8. At least Asus had the common sense to take a 3rd x8 from the chipset for a third card.

I know desktop users prefer direct NVMe slots, but for the prosumer/server audience, giving an x8 or x16 slot and letting them decide what to use the lanes for isn't too much to ask.
 
  • Like
Reactions: Techspin

nivedita

Member
Dec 9, 2020
41
21
8
The X570 Pro WS has everything I need but lacks IPMI. Hopefully something similar pops up soon.
The X570 Pro WS appears to support out-of-band management via ASUS Control Center Express. Anyone know how that compares to IPMI experience/whether regular IPMI tools actually work with it?
 
  • Like
Reactions: Techspin

zer0sum

Active Member
Mar 8, 2013
455
164
43
The X570 Pro WS appears to support out-of-band management via ASUS Control Center Express. Anyone know how that compares to IPMI experience/whether regular IPMI tools actually work with it?
I haven't used it personally, but I was interested in buying it, and after researching, the consensus was that it absolutely sucked!
I think this sums it up: "An IPMI with a terrible Windows-only NodeJS interface and bare-bones features. Likes to disconnect while running, and shuts down with the rest of the computer, preventing you from remotely turning it back on."

Have you guys looked at the Tyan S8020 at all?
  • AMD Socket TR4 for 1900X, 1950X or 2990WX
  • 12" x 9.6" ATX form factor
  • X399 chipset w/ 8 DIMM slots
  • 2 x 1000Base-T LAN w/ shared IPMI port, or 2 x 10GbE
  • 4 x PCIe Gen.3 x16 slots (1 in x8 link)
  • 1 x PCIe Gen.2 x8 slot (w/ x4 link)
  • 7.1 channel HD audio
  • AST2500 BMC with IPMI v2.0 support
 


Last edited:
  • Wow
Reactions: Techspin

Alex0220

New Member
Feb 13, 2021
24
3
3
If you want to go with Intel (please don't kill me, Intel haters), then the ASUS P11C-I is perfect. It's basically the cheapest board with ECC support for the Intel Xeon E-2100 and E-2200 series. It also has IPMI via the ASMB9 module. It's Mini-ITX, so I hope that's not a problem. P11C-I | Server & Workstation | ASUS Deutschland

You could also use a Supermicro X11SCA-F
 
  • Like
Reactions: Techspin