Zeus V2 : U-NAS NSC-810A | X10SL7-F | E3-1265 V3


itronin

Well-Known Member
Nov 24, 2018
1,285
852
113
Denver, Colorado
Isn't it risky for a short circuit
It can be. If you left a standoff in place in the chassis and put the motherboard over the top of it, then yes, a short circuit is likely. Good practice is to remove all the standoffs and install only those that line up with the motherboard's mounting holes, to ensure you don't accidentally set the board down on a standoff that has no corresponding mounting hole.

Depending on how much the motherboard flexes, and the insertion force of the cards you'll be installing, you can also put a nylon standoff (cut down as required) under any motherboard hole that has no matching chassis standoff, to keep the board from flexing too much.

Another good practice is to test the motherboard, CPU, and some (or all) of your memory outside the case. Use cardboard or an equivalent non-conducting surface underneath the motherboard.
 

Croontje

New Member
Dec 24, 2019
4
0
1
It can be. If you left a standoff in place in the chassis and put the motherboard over the top of it, then yes, a short circuit is likely. Good practice is to remove all the standoffs and install only those that line up with the motherboard's mounting holes, to ensure you don't accidentally set the board down on a standoff that has no corresponding mounting hole.
Hi, that's why I asked. The standoffs in this case are not removable, so I was wondering how other people deal with them. I was thinking about drilling them out, but since nobody mentions removing them, maybe they didn't, or they insulated them ...
 

Croontje

New Member
Dec 24, 2019
4
0
1
Hi, that's why I asked. The standoffs in this case are not removable, so I was wondering how other people deal with them. I was thinking about drilling them out, but since nobody mentions removing them, maybe they didn't, or they insulated them ...
Just to let anyone with the same problem know: I used a drill to drill out the standoff, then removed the remains with some pliers :)
 
Nov 17, 2020
24
1
3
Hello all - I can't believe I stumbled upon this forum while looking for people who may be experts in what I'm trying to accomplish :)

Per my username, I live in a tiny studio apartment. I have outgrown a 5-bay enclosure and would like to build a QuickSync-capable server in the tiniest possible chassis to fit in my TV stand, which is where the U-NAS NSC-810A comes into play. So I've embarked on building a system that suits my needs, and I've pasted the basic parts list below. After reading some of the threads here, I have a few questions specifically for anyone who has built in or is familiar with this case:

  1. I have been trying to decide, going back and forth, between the Enhance ENP-7660B 600W, the FSP Flex Guru 500W, and the Seasonic SS-350M1U. I realized the Seasonic isn't really "modular" per se (i.e. you can't remove the excess unused SATA cables or the floppy cable), so why not go with the shorter 150mm FSP Flex Guru that has more power, or go all out with the 7660B?
    • Am I correct to assume the only cables I'll need from my PSU right now are: (1) the 24-pin to the motherboard, (2) the 8-pin for the CPU, and (3) two Molex from the PSU split into four with a splitter, to wire the four Molex ports on the chassis backplane?
    • Are there any issues I should be thinking of with the non-modular nest of cables that comes with the FSP? It's a 4cm shorter PSU, so I thought that might help with the unused cables in that set, which seem like a lot (1x 8-pin / 2x 6+2 / 2x daisy-chained SATA / 1x floppy won't be used, I think)
    • Are there any bracket issues with using a "Flex" PSU that is 150mm vs. the recommended 190mm Seasonic SS-350M1U, which is billed as "1U"? I was reading a few reviews of a similar 300W PSU on Newegg that seemed to imply there is a difference in bracketing between "Flex" and "1U" PSUs
  2. I want a mobo that supports at least 8 SATA connections, ECC RAM, and Intel QuickSync (IPMI and 10GbE are a bonus). That left me deciding between the Gigabyte W480M Vision W and the ASRock Rack W480D4U
    • My BIG concern with the Gigabyte W480M is that the 8 SATA ports are right on the edge of the mobo, and I want to make sure there is enough clearance to actually plug in SATA cables (whether L-shaped or straight). Any help here would be greatly appreciated before I waste $200 on a useless mobo. See the picture below of what I'm trying to solve for (the bottom two pictures are someone else's build)
    • [Attached image: SATA Clearance with NSC810A.jpg]
  3. Since this will be in my studio, running quiet is almost as important to me as size. Are there any specific upgrades or parts you'd focus on to keep the server really quiet? I was thinking of upgrading the stock fans to Noctua NF-S12As and getting the 7660B with a Noctua NF-A4x20 swapped in, but I'm not sure if there's anything else I can do to make it as silent as possible
  4. Does my build below accommodate 2 PCIe expansion cards later in the future? I was thinking a GPU and a 10GbE card (not sure what else people use their server PCIe slots for; any creativity / ideas would also be helpful)
    • I believe the Seasonic won't allow me to add a GPU, so I may strike it off my list for that reason alone
  5. Am I missing any other basic components (cables, extenders, etc.) from the list below? I had read negative feedback like "standard power cables won't reach" and "front panel cables don't reach the mobo", so I wasn't sure how to tell whether my build has those issues, or which specific cables I'd need if it does
    • Does anyone know exactly how long the power cables from my PSU need to be to "reach everything"?
  6. What are some first-time tips you'd give for building in this chassis, given the tight space?
Case: U-NAS NSC-810A
Motherboard: Gigabyte W480M Vision W (LGA 1200)
CPU: Intel Xeon W-1290T 1.9GHz (10C/20T)
CPU Cooler: Noctua NH-L9i
RAM: 16GB DDR4 ECC SDRAM
GPU: not needed right now with Intel QuickSync
PSU: Enhance 7660B 600W (Platinum) or FSP Flex GURU 500W (Gold)
HDDs: 8x 16TB WD Easystore
M.2 Drive #1: 1TB NVMe 4.0 M.2 SSD (Samsung 980 Pro)
M.2 Drive #2: 1TB NVMe 4.0 M.2 SSD (Sabrent Rocket)
HBA Card: not needed with selected mobo
O/S: Unraid Plus
Other: Noctua NF-S12A PWM fans (2x)
Other: Noctua NF-A4x10 (PSU fan replacement)
Other: PCIe riser cable (10GbE card in future)
Other: miscellaneous cables / extenders?

@K D
@jingram
@Churchill
@PigLover
@IamSpartacus
 

Ixian

Member
Oct 26, 2018
88
16
8
You'll never get right-angle SATA connectors to work with the 810A; there's no clearance there. I'd rule out the Gigabyte board anyway, as it's a "creator" model (I guess that's what they call workstation boards nowadays) and lacks IPMI/remote management. Personally, I'll never build a server without it, even one sitting right next to me. It has been too useful too many times, and it really makes headless setups feasible.

The ASRock board is a decent choice, though the SATA connections might still be tight. It has an ASPEED AST2500 and supports an HTML5 remote console if I'm not mistaken (the older BMC/remote combos that are Java-console based are a pain in the ass to manage).

Here's the other thing to consider, though - why the 810A? If it's for hot swap, then here's my take: you don't need it. Hot swap in a small home NAS environment is a nice-to-have, in my opinion, not a requirement. And it's not worth making a lot of other sacrifices for, such as a super-tight installation (the 810A is a bit of a nightmare to work in), odd and expensive power supply restrictions (you can't easily use a 1U with it; you need that oddball "Flex" format that is hard to find - you'll also need to factor in what you'll go through if it fails), and so on.

Hot swap in a typical home lab is only useful for the convenience factor. Outside of an environment like a data center or well-built IT office space - where there is redundancy built in throughout (switches, cabling, power, etc.), ease of maintenance for large numbers of drives is important, and high availability is an absolute must because of the cost of downtime - there's no real benefit to hot swap.

Also consider how often you'd actually hot swap a drive in your environment - ideally (and statistically) it's a rare occurrence. I've done all of 3 drive replacements in the last 6 or so years with my home setup. All in all, you can undoubtedly survive an extra 15 minutes or so of downtime in a home environment if you need to shut down to replace a drive, particularly if you can schedule it for an off hour or a weekend. What you really need, then, is a case where swapping out drives is simple, even if it isn't hot swap.

That's why I like the Node 804: it's a compact form factor (not much bigger than the 810A), much easier to work in, uses standard ATX power supplies, has fantastic cooling support (3x 120mm and 1x 120-140mm fans blowing front to back), and easily fits 8x 3.5" drives + 2x 2.5" drives. The 3.5" drives hang down in easy-to-remove racks - label them with the last 5 digits of their serial numbers for easy identification and they are very easy to swap in and out.

You can fit two more 3.5" (or 2.5") drives on the other side of the divided chamber, under the motherboard, though 3.5" drives will make it a bit cramped. If you are into 3D printing, there are several accessories you can print for it - custom drive mounts to add 1 or 2 more drives behind the power supply, custom blanking plates with a logo to replace the side window, additions to make drive mounting tool-less (no screws), and so on.

I switched to 2 of them two years ago for my primary and backup servers, since I was space-constrained and couldn't use a 4U rack solution. Can't recommend them highly enough. Maintenance has been easy, and the lack of hot swap a non-issue. 2 or 3 times a year I clean the dust filters in the front, and that's about it.
 
Nov 17, 2020
24
1
3
I considered the 804 but it simply is too big for me (13.5" x 12.1" x 15.3" vs. 12.4" x 10.8" x 7.8" for the 810A). If I had the space, I probably wouldn't have messed around with SFF, but alas, I can't move out of my apartment, so I need something that fits within my restrictions. I don't think there is any case that comes anywhere near as small for at least 8 bays + mATX as the U-NAS chassis.

I haven't ever used IPMI and wasn't sure how much I would need that feature (but it's certainly a good feature of the ASRock W480D4U). I'm currently facing a big issue with my existing server (Windows): it crashed in a city 1,000 miles away, so I have no way to restart it. Is that what IPMI solves for? I thought having Unraid running would somewhat solve any "container / docker / Windows VM" crashes, and I'd also have an Amazon Alexa smart plug to remotely switch it on / off for future use (i.e. force a reboot after a power loss).

That is really unfortunate about the Gigabyte... I really like that motherboard given the 2.5GbE ethernet. The ASRock W480D4U doesn't seem to have any audio ports either, and I don't want to buy a motherboard that needs a separate PCIe audio card, since I'll likely also have this hooked up to my TV.
 

Ixian

Member
Oct 26, 2018
88
16
8
I considered the 804 but it simply is too big for me (13.5" x 12.1" x 15.3" vs. 12.4" x 10.8" x 7.8" for the 810A). If I had the space, I probably wouldn't have messed around with SFF, but alas, I can't move out of my apartment, so I need something that fits within my restrictions. I don't think there is any case that comes anywhere near as small for at least 8 bays + mATX as the U-NAS chassis.

I haven't ever used IPMI and wasn't sure how much I would need that feature (but it's certainly a good feature of the ASRock W480D4U). I'm currently facing a big issue with my existing server (Windows): it crashed in a city 1,000 miles away, so I have no way to restart it. Is that what IPMI solves for? I thought having Unraid running would somewhat solve any "container / docker / Windows VM" crashes, and I'd also have an Amazon Alexa smart plug to remotely switch it on / off for future use (i.e. force a reboot after a power loss).

That is really unfortunate about the Gigabyte... I really like that motherboard given the 2.5GbE ethernet. The ASRock W480D4U doesn't seem to have any audio ports either, and I don't want to buy a motherboard that needs a separate PCIe audio card, since I'll likely also have this hooked up to my TV.
Remote access when all else fails is certainly one of the things IPMI can do for you, though it depends on why the server crashed - if it's a hung Windows or other OS problem, then yes, assuming you had IPMI set up correctly, you could log in and see what's going on, reset power, possibly even fix the problem via diagnostics - it gives you a remote terminal into the system. If the system doesn't have power, or the hardware failed, obviously there's not much to be done.
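To make that concrete, here's roughly what day-to-day IPMI use looks like from any other machine on your network, using the standard ipmitool CLI - just a sketch; the BMC address and credentials are placeholders for whatever you configure on the board first:

    # Query power state - the BMC answers even when the OS is hung
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' chassis power status

    # Hard power-cycle the box remotely
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' chassis power cycle

    # Dump temperatures, fan speeds, and voltages from the BMC's sensors
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' sdr list

    # Text console over the network (needs Serial-over-LAN enabled)
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P 'secret' sol activate

Boards like the ASRock also give you an HTML5 KVM in the browser on top of this, but even the bare CLI covers the "server hung 1,000 miles away" case.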

Why do you care about onboard audio if you are hooking it up to a TV? You'd usually want to send audio over HDMI, unless you have some unusual requirement or we're talking about a really old TV.

Finally, I hear you on the size, though again I recommend looking at the overall tradeoff picture. You have to make quite a few concessions in your build with that smaller case, plus it costs more (the case and power supply are each hundreds of dollars more than an 804 and an ATX PSU), and the 804 isn't that much bigger (the biggest difference is depth) - maybe there's somewhere else you can fit it? Worth thinking about anyway.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
512
113
I haven't ever used IPMI and wasn't sure how much I would need that feature (but it's certainly a good feature of the ASRock W480D4U). I'm currently facing a big issue with my existing server (Windows): it crashed in a city 1,000 miles away, so I have no way to restart it. Is that what IPMI solves for?
I'd consider IPMI one of those things I never thought was worth the substantial extra cost involved; now I consider it essential - even for the little server sitting two feet away from me, it's so much more convenient to log in to the IPMI interface to keep an eye on bootup or rejiggle the BIOS than schlepping around with a monitor and a VGA cable.

In the case of your remote server, IPMI alone isn't the whole solution - it's way too big a security hole to leave exposed directly to the internet, so you'll also want to put it behind a VPN or similar if you don't have one already.
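As a sketch of what that looks like: a minimal WireGuard client config on your laptop might be something like the below (keys, addresses, and hostname are all placeholders). The point is that only the VPN's UDP port ever faces the internet; the BMC stays on the LAN behind it.

    [Interface]
    # This machine's tunnel identity and address (placeholders)
    PrivateKey = <laptop-private-key>
    Address = 10.8.0.2/24

    [Peer]
    # Home VPN endpoint - the only thing exposed to the internet
    PublicKey = <home-server-public-key>
    Endpoint = home.example.net:51820
    # Route only the home LAN (where the IPMI interface lives) via the tunnel
    AllowedIPs = 192.168.1.0/24
    PersistentKeepalive = 25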

I used to use the mITX version of this case, the U-NAS NSC-800, as my main server (due to living in tiny London house-shares), but I've now moved to the InWin MS08 mini-tower, simply because it's still pretty small for what you can fit inside it, and so much easier to work in and keep cool than the U-NAS case. Probably still too big for your purposes, but I thought I'd mention it just in case.
 

Ixian

Member
Oct 26, 2018
88
16
8
Also - you mentioned leveraging QuickSync, I assume for transcoding (Plex, etc.)? If you do that via the P630 onboard the Xeon, you won't be able to also use it for display purposes (at least not if you are passing it through to a container via Unraid, etc.).

You asked about adding PCIe expansion to the 810A - you can cram in maybe one single-slot card via a riser cable (you'll need to get one). I've seen some builds that managed two slots, but it is super tight, and you've also got heat to consider.

A smaller single-slot GPU might fit, but I'd worry about heat. You could fit a single-slot or, better yet, low-profile 10GbE card, but if it's 10GBase-T (copper), those run extremely hot as well, and the 810A doesn't have much in the way of airflow for the mainboard (that single 70mm side fan doesn't do much). And both? Forget about it.

Not to sound like a broken record here, but none of the above is a problem with the Node 804 :) I have a Quadro P2200, an Intel 10GBase-T card, and an Intel Optane 905P PCIe SSD in one of mine. I used to have a full-sized GeForce GTX 1070 from an old gaming build in the one I used for hardware transcoding, until I upgraded. I know you want to fit this in your TV stand, but with everything you want to do, it's worth looking at putting it outside the stand - maybe right next to it?
 
Nov 17, 2020
24
1
3
Why do you care about onboard audio if you are hooking it up to a TV? You'd usually want to send audio over HDMI, unless you have some unusual requirement or we're talking about a really old TV.

Finally, I hear you on the size, though again I recommend looking at the overall tradeoff picture. You have to make quite a few concessions in your build with that smaller case, plus it costs more (the case and power supply are each hundreds of dollars more than an 804 and an ATX PSU), and the 804 isn't that much bigger (the biggest difference is depth) - maybe there's somewhere else you can fit it? Worth thinking about anyway.
Ah, I wasn't aware that onboard audio is totally irrelevant when using HDMI (yes, it will be connected via HDMI to an AV receiver, which then connects to my 4K TV). That solves my only other issue with the ASRock board besides needing a separate PCIe 10G card. I agree it will be expensive - $300 for the ASRock W480D4U, $215 for the 7660B, and $250 for the chassis itself - but luckily I haven't spent any money on myself this year, so I'm treating it as a birthday gift / Christmas gift / you-worked-hard-during-COVID-so-buy-something-nice-for-yourself.

To be frank, I'm also just super interested in / obsessed with getting as much power out of a tiny chassis as possible; I've always gravitated towards tiny SFFs for old gaming PC builds. The 810A can fit in my suitcase too, as I travel a lot between my apartment and my family members' houses around the US while I decide where to permanently "domicile" the server (I'm undecided whether to keep it in my apartment, or permanently at my parents' place, where they have gigabit internet, though I'm worried about full-time remote management). I've noted the importance of IPMI - you all have convinced me!

Also - you mentioned leveraging QuickSync, I assume for transcoding (Plex, etc.)? If you do that via the P630 onboard the Xeon, you won't be able to also use it for display purposes (at least not if you are passing it through to a container via Unraid, etc.).
I'm coming from Windows, so I'm a bit unfamiliar with all this. I have an Nvidia Shield connected to my AV receiver / 4K TV, so I don't imagine connecting a display very often, but is there literally no way besides adding a GPU to get video out from the server if QuickSync is part of my Plex docker? If relevant - I'm not going to immediately move over to a full docker build; I'll probably start off with a Windows VM that pulls in my existing 60TB library on StableBit DrivePool, then slowly build out a full Unraid solution until I feel comfortable ditching Windows.
 

Ixian

Member
Oct 26, 2018
88
16
8
If you pass a hardware device through to a VM or container, it's usually exclusive, particularly with video cards. Technically the iGPU (P630) on that Xeon isn't a "video card", but from a hardware point of view it is. If, for example, you pass Intel QuickSync through to Plex (or Emby, or whatever), whether it's running in a VM or a docker container, you'll lose the use of that video output for other things.

If you are running Unraid on the box, you don't really need a video output, since Unraid is managed through a web console you can access from something else (even mobile). The only thing you'd need an output for is troubleshooting, and if you get the board with IPMI, you'll have that covered, since you can get console access over the network too. You can even mount USB drives, ISOs, etc. and do remote installs; people do it all the time.

If you have a Shield, then you have the client part settled; you don't also need to hook the server up to your TV. Use it as a NAS/Plex (or whatever) server and the Shield as a client. Tons of folks do that, and it works great.

Frankly, this also removes the need to have this server in your TV cabinet, which brings us back to... reconsidering the case :) As for suitcase portability, I personally wouldn't be comfortable hauling around even the 810A as-is; at the very least you should remove the drives and pack them separately, and you'll also want to make sure things like the CPU cooler are secured in the case. I know people haul these things around, but if you are talking about taking it on a plane or a long ride a few times a year, my point is neither solution is simple to transport.

With the Node you can remove both drive cages (4 drives each) with a thumbscrew and bubble-wrap them; that's what I did when I moved houses a while back. I'll shut up about the Node now though, because I'm starting to sound like a salesperson for Fractal, and I'm not :)
 
Nov 17, 2020
24
1
3
If you pass a hardware device through to a VM or container, it's usually exclusive, particularly with video cards. Technically the iGPU (P630) on that Xeon isn't a "video card", but from a hardware point of view it is. If, for example, you pass Intel QuickSync through to Plex (or Emby, or whatever), whether it's running in a VM or a docker container, you'll lose the use of that video output for other things.

If you are running Unraid on the box, you don't really need a video output, since Unraid is managed through a web console you can access from something else (even mobile). The only thing you'd need an output for is troubleshooting, and if you get the board with IPMI, you'll have that covered, since you can get console access over the network too. You can even mount USB drives, ISOs, etc. and do remote installs; people do it all the time.

If you have a Shield, then you have the client part settled; you don't also need to hook the server up to your TV. Use it as a NAS/Plex (or whatever) server and the Shield as a client. Tons of folks do that, and it works great.

Frankly, this also removes the need to have this server in your TV cabinet, which brings us back to... reconsidering the case :) As for suitcase portability, I personally wouldn't be comfortable hauling around even the 810A as-is; at the very least you should remove the drives and pack them separately, and you'll also want to make sure things like the CPU cooler are secured in the case. I know people haul these things around, but if you are talking about taking it on a plane or a long ride a few times a year, my point is neither solution is simple to transport.

With the Node you can remove both drive cages (4 drives each) with a thumbscrew and bubble-wrap them; that's what I did when I moved houses a while back. I'll shut up about the Node now though, because I'm starting to sound like a salesperson for Fractal, and I'm not :)
Haha, no worries. I actually was set on ordering the Node 804 in September, until I found this little U-NAS case on Reddit and had to double-check the dimensions, as I didn't believe an 8-bay case was available with < 8" depth (my TV stand is only 12" deep). I get it - for all practical purposes it makes sense to go a little bigger (and it probably would've made sense from U-NAS's perspective to make the 810A even slightly bigger).

You make a good point about the Unraid box really just being remotely accessible while clients do everything else - no need to output video unless truly troubleshooting. In my head, I'd envisioned that since I have a PC sitting in my TV console (it's the only place in my apartment it could go; there's one standing closet, then a kitchen / bed in < 350 sq ft), it could also work as an emulator box for playing Dolphin, or any other random emulation / "PC on a TV" use case I wanted. I was targeting an 8-SATA mobo specifically with an eye to adding a single-slot GPU later down the line if I find that a compelling use case (in addition to a 10GbE card later on; I would've loved it if the Supermicro X12SCZ-TLN4F had 2 M.2 ports and 8 SATAs... that would've been a perfect board).

I certainly wouldn't be traveling with it often - maybe moving it between locations once a year depending on where I'm staying (I work remote now, so I have the new luxury of choosing which city I stay in for a few months a year), and obviously I'd take out the spinners and keep them in my backpack.
 

Ixian

Member
Oct 26, 2018
88
16
8
The Shield has a number of emulator options, and there's also the tried-and-true Raspberry Pi solution - the Pi 4 is cheap and quite powerful, and will even run Dolphin well. You are way better off with a simple client/server model here - let the Unraid box handle NAS and docker duties as an application server, use the Shield as a client, and maybe add a Pi if you want more emulation / messing-around options. Trying to get the Unraid box to do it all will be a mess - I know "converged home servers" were all the rage and maybe still are, but that route is full of compromises.

In your case, trying to stuff even a smaller GPU in that box, with everything else you have going on in it, is probably going to be a bridge too far. That W-1290T might be pushing it a bit even at 35W TDP. Speaking of which, do you know how you are cooling it? 2U coolers don't fit in the 810A; you need a 1U cooler or one of the rarer 1.5U hybrids.
 
Nov 17, 2020
24
1
3
Any issues with the Noctua NH-L9i and 3x Noctua NF-S12A for cooling? I thought a 35W TDP wouldn't cause any issues at all as long as I keep the cabling tidy (a few guys who've built in the 810A, whom I've met on Reddit / Discord, say their cases run very cool, while I think some guys on this site have said they've seen 80C+ on older 80W TDP Xeons).

I think you're right, the use case for server display output is minimal. I'd like to keep the option open for a later date, but for now it seems disabling the iGPU display output to keep it on passthrough for Plex isn't a huge issue. I have a Shield and an Xbox Series X for games; maybe I'll just buy a Nintendo Switch, as I really only use emulation to play Smash Bros. / Mario Kart with friends using Xbox controllers.

If I have the iGPU dedicated to the Plex docker, can I still run a Windows VM, or how does that work? What about anything else that may need the GPU? (I would probably encode files using the CPU, but I guess you could also use GPU encoding... is it impossible to encode on the GPU while the Plex docker is running?)
 

Ixian

Member
Oct 26, 2018
88
16
8
For cooling, the biggest issue in the 810A is the weird-size 70mm fan on the side. You need good air intake, and it's in a non-optimal spot. You should be fine with that CPU and an NH-L9i, but if you start trying to stuff PCIe cards in there too, you may run into heat problems.

You don't need to disable the display (not sure exactly what you mean - don't disable the onboard graphics for the CPU); just don't plug anything into it.

If you do hardware passthrough of a GPU to a VM, it won't be available for anything else. If you pass a GPU through to a docker container, other containers can technically access it, but you will run into serious issues if more than one tries to access it at once (with Unraid, it will actually lock up the server). So basically treat it as dedicated, unless you are good about micromanaging what uses what.
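For what it's worth, the mechanics of handing the iGPU to a container boil down to mapping the /dev/dri device node into it. A rough sketch using the official plexinc/pms-docker image - the host paths are examples, and on Unraid you'd normally put the --device flag in the container template's Extra Parameters rather than running docker by hand:

    # Expose the Intel iGPU render node to the Plex container for QuickSync
    # (host paths below are placeholders; adjust for your shares)
    docker run -d --name=plex \
      --device=/dev/dri:/dev/dri \
      -v /mnt/user/appdata/plex:/config \
      -v /mnt/user/media:/media \
      plexinc/pms-docker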

You can run a Windows VM without a GPU; you'll just lose accelerated graphics and other features, but it'll work fine emulated.
 
Nov 17, 2020
24
1
3


Found these machines while googling around - basically prebuilt versions of what I'm trying to accomplish, but with more bays (8-12 HDD bays + 4 SSD hot-swap bays + built-in 2x 10GbE + 4x 2.5GbE + an 80W TDP Xeon W-1250 3.3GHz 6C/12T) in a slightly larger chassis. Wondering what kind of GPUs these QNAPs support (edit - found a compatibility list, which implies no more than a Quadro P2000 or 1050 Ti) and whether they support Unraid out of the gate vs. being locked into QNAP's operating system.

There's also an older model, the QNAP TVS-872XT, which is closer in dimensions to the 810A with 8 bays. And there's a TVS-873e with an older 2.1GHz AMD R-Series quad-core + 8GB RAM + AMD R7 GPU (curious how that compares with the Intel UHD P630 iGPU on the W-series / 10th-gen consumer Intel chips).

Very interesting choices... maybe it's time to revisit my options, especially once I start thinking about price: the 810A build (35W 1.9GHz Xeon W-1290T + ASRock W480D4U + 16-32GB ECC + Enhance 7660B 600W PSU + ConnectX-3 10G + misc fans/cables) is getting close to the TVS-h1288X, without even factoring in the pain of building, the warranty on the device, and the availability of some of those parts (chassis / CPU / PSU are hard to find).

Dimensions in inches:

                UNAS 810A   TVS-h1288X   TVS-h1688X   TVS-872N/XT   Node 804   TM D5-300c
Width               12.40        14.56        14.56         12.96      13.54         8.94
Depth               10.83        12.59        12.59         11.01      15.31         8.86
Height               7.76         9.24        11.96          7.41      12.09         5.35
Volume (in³)     1,041.38     1,693.79     2,192.39      1,057.33   2,506.95       423.88

Difference vs. the 810A, in inches (negatives in parentheses), plus two direct comparisons:

                TVS-h1288X   TVS-h1688X   TVS-872N/XT   Node 804   TM D5-300c   h1688X vs. h1288X   h1288X vs. 872XT
Width                 2.16         2.16          0.56       1.14       (3.46)           -                   1.60
Depth                 1.76         1.76          0.18       4.49       (1.97)           -                   1.58
Height                1.48         4.20        (0.35)       4.33       (2.40)        2.72                   1.83
Volume (in³)        652.41     1,151.01         15.95   1,465.57     (617.49)      498.60                 636.46
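(Volume is just width × depth × height - e.g. for the 810A, 12.40" × 10.83" × 7.76" ≈ 1,042 in³, which matches the 1,041.38 above to within rounding.)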
 
Nov 17, 2020
24
1
3

Great review of the unit. It's $2,000, which isn't cheap at all, but I think it's a very clean alternative to the 810A that I'm now leaning towards (I wish it had a PCIe x16 slot and NVMe 4.0 M.2 slots, though).
 

Evan

Well-Known Member
Jan 6, 2016
3,346
601
113
The main issue I see is that you have to use the QNAP OS, I assume. A quick Google turns up something called 'Linux Station', but I'm sure that's not the same as doing your own CentOS install or whatever.
 

Ixian

Member
Oct 26, 2018
88
16
8
That thing is a real 180 from what you were looking at building.

  • It is very expensive, even considering what you get with it
  • You'll be lucky to find one for sale (they appear sold out/unavailable in many places)
  • It runs QuTS, QNAP's proprietary OS. There are apps for it like Plex, of course, but you are limited to them, their update schedule, etc. It also supports Docker/LXC, so it is flexible in that regard, though the QNAP forums indicate some compatibility and other problems with that setup.

That isn't a box that's meant for BYO GPUs, etc. You go with what it supports. It's an appliance, not an open ecosystem you can easily customize. Think about things like spares for power, etc. - the power supply in it looks proprietary; how will you replace it if it goes bad?

Not saying it's a bad idea, just that, again, it is nothing like the BYO approach you were looking at, and it has a different set of pros and cons. It is a nice form factor, which is the advantage of a custom design like QNAP did for the motherboard/PSU/etc., but on the con side, it limits you to their supply chain if something fails.
Not saying it's a bad idea, just that again, it is nothing like the BYO approach you were looking at and has a different set of pros/cons. It is a nice form factor, which is the advantage of going custom design like Qnap did for the motherboard/ps/etc. but on the con side limits you to their supply chain if something fails.Etc.