PVE Cluster using UNAS 810A


EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
This is a long thread with lots of photos. The basic goal is to create a home lab plus a home NAS environment and a home media hub.

My Network Diagram
Drawing1.jpg

My Network Equipment
network.jpg

My first build of the node used the UNAS NSC800 chassis with an Intel i7-6700T on an ASRock C236 WSI. I was a bit disappointed because the chassis only supports mini-ITX and the board only supports 32 GB of RAM.
unas-0.jpg
unas-nsc800-1.jpg
unas-nsc800-2.jpg
unas-nsc800-3.jpg

Adding a 10 GbE NIC was no hassle; however, I am not too fond of how air flows through the CPU heatsink with this chassis design, so I sold it after a month of use, even though the CPU never went above 40 C since it is an i7-6700T.

My first PVE node uses a SilverStone CS280 with an ASRock C236 WSI4-85.
hcs1_0.jpg
hcs1_a.jpg
hcs1_b.jpg
hcs1_c.jpg
hcs1_d.jpg
hcs1_e.jpg
hcs1_f.jpg
It is an all-flash array with six 960 GB Samsung enterprise SSDs and an E3-1585 v5 CPU, intended for testing a VDI solution based on Intel GVT-g.
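For anyone curious how the GVT-g side gets wired up, a rough sketch is below; the VM ID, mdev profile name, and the assumption of a recent PVE release are placeholders, not my exact config.

```
# Host: enable iGPU mediation (then reboot)
#   kernel cmdline: intel_iommu=on i915.enable_gvt=1
#   /etc/modules:   kvmgt, vfio-mdev

# See which vGPU profiles the iGPU exposes (00:02.0 is the usual iGPU address)
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Hand a vGPU to VM 100 (newer PVE creates the mdev device for you)
qm set 100 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
```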


The NSC 810A is a well-designed chassis for an 8-bay NAS thanks to its air circulation, and it supports an mATX motherboard.
nsc810a_1.jpg

It has two revisions: one with a side USB 3.0 port and another with two side USB 2.0 ports.
nsc810a_2.jpg

I had to take the back panel and case fan off in order to reorganize the backplane 4-pin power cables.
nsc810a_3.jpg

Now there are no objects blocking the case fan airflow.
nsc810a_3a.jpg

The revision with two side USB 2.0 ports
nsc810a_4.jpg

UNAS provides a flexible PCIe riser (expander) with nice build quality; it is durable and does not tear easily.
pcie-expander.jpg



My second PVE node is built with the UNAS NSC 810A chassis, a Supermicro X11SAE-M, and an Intel E3-1275 v6.
hcs2_1.jpg

The board has two internal USB 3.0 headers, which drive the chassis front and side USB 3.0 ports.
hcs2_2.jpg

It also has two USB 3.1 ports, DVI/DP/HDMI outputs, two USB 3.0 ports, and HD audio output, which makes it very well suited to serve as a media hub/output center.
hcs2_3.jpg

The legacy PCI slot amused me... who still uses it anyway?
hcs2_4.jpg

Its M.2 PCIe slot; I decided not to use it due to a potential airflow issue (doubtful it could keep an NVMe SSD cool).
hcs2_5.jpg

The board installed
hcs2_6.jpg

An Intel 750 SSD and a Mellanox ConnectX-3 EN NIC are added via a flexible PCIe riser. The Intel 750 SSD is used as ZIL (SLOG) and cache (L2ARC) for the ZFS pool.
hcs2_9.jpg
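Roughly how one NVMe drive serves both roles; the pool name, device path, and partition sizes below are placeholders rather than my exact layout.

```
# Placeholder pool "tank" and device /dev/nvme0n1 -- adjust to your system
sgdisk -n1:0:+16G -t1:bf01 /dev/nvme0n1   # small partition for the ZIL/SLOG
sgdisk -n2:0:0    -t2:bf01 /dev/nvme0n1   # rest of the drive for the L2ARC

zpool add tank log   /dev/nvme0n1p1       # attach as separate log device (SLOG)
zpool add tank cache /dev/nvme0n1p2       # attach as read cache (L2ARC)
zpool status tank                         # confirm both vdevs show up
```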

PCIe Devices side view
hcs2_10.jpg

I did have to buy extension cables for this board's front-panel header to get the front power and reset switches working. The PSU is the one included by UNAS.
hcs2_11.jpg
I don't like this board that much because it does not have a USB 3.0 Type-A port on the board itself and it lacks IPMI. But I guess you can't have both multimedia features and server management features in one board. The big bummer is that the board's default BIOS will not work with an E3 v6 CPU; you need an Intel 6th-gen CPU or an E3 v5 CPU to update the BIOS to the latest version in order to get it working.
 

Markus

Member
Oct 25, 2015
I have been running a virtual pfSense appliance in a 3-node cluster for some time.

So some suggestions:
- You probably want to use Ceph on a small number of your HDDs because of live migration
- Keep in mind that besides live migration for host upgrades, you can use pfSense clustering to cover pfSense reboots
- So you probably want a 3-node PVE cluster with Ceph for live migration, combined with a 2-node virtual pfSense cluster (rough sketch below)
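A rough sketch of the Ceph side on PVE follows; command names shift slightly between PVE releases, and the network and OSD device are placeholders.

```
# On every node: install the Ceph packages
pveceph install

# Once, on the first node: dedicate a network to Ceph traffic
pveceph init --network 10.10.10.0/24

# On each node: one monitor, plus one OSD per spare HDD
pveceph createmon                # newer PVE: pveceph mon create
pveceph createosd /dev/sdb       # newer PVE: pveceph osd create /dev/sdb
```

With a replicated pool across the three nodes, VM disks live on Ceph and live migration only has to copy RAM.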

Regards
Markus
 

EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
My third PVE node is built with the UNAS NSC 810A chassis, a Supermicro X11SSH-LN4F, and an Intel E3-1245 v6.

The Intel 750 SSD is used as ZIL and cache for the ZFS pool.

The board installed without hassle. I really like this board: I did not have to use any cable extenders, and the board works with an E3 v6 CPU without updating the BIOS.
hcs3_1.jpg

Installed the Intel 750 SSD with a poor-quality flexible PCIe riser (not from UNAS)
hcs3_2.jpg

Installed the Mellanox ConnectX-3 EN 10 GbE NIC
hcs3_4.jpg

Everything is installed, and Proxmox boots from a SanDisk Ultra Fit drive on the onboard USB Type-A port.
hcs3_4a.jpg

PCIe Device Side View
hcs3_5.jpg

Back view
hcs3_7.jpg

Mellanox ConnectX-3 EN CX312A
mellanox_1.jpg

Mellanox ConnectX-3 EN CX312A
mellanox_2.jpg

My PVE node 1 and PVE node 3 inside my TV cabinet; there is still some Ethernet re-cabling work to do, and I will also add more 9 cm USB fans to increase the airflow inside the cabinet.
hcs1_hcs3.jpg
 



EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
Thanks for your suggestions. I have tried Ceph storage with the current setup and its performance was miserable even with 10 GbE networking. I guess it just does not have enough nodes and OSDs.

I am using a dual-router, dual-WAN setup. One router is a UniFi USG (really not recommended) and one is pfSense with two Ethernet ports handed to it via PCIe passthrough for optimal performance.

All my home clients use the USG as their default gateway, and all my servers and their VMs/LXCs use pfSense as their default gateway.

I do have a second pfSense VM on watchdog standby on my PVE cluster. Because I am using PCIe passthrough for it, I cannot use the live migration feature. Two of my PVE nodes have quad Ethernet ports and all of them have 10 GbE ports, so I can spare two physical GbE ports for my pfSense VMs.
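For reference, the passthrough itself is just a couple of qm settings per pfSense VM; the VM ID and PCI addresses below are made up, so check yours with lspci.

```
# Hypothetical VM ID 100; find the NIC functions with: lspci | grep -i ethernet
# Requires IOMMU enabled on the host (e.g. intel_iommu=on on the kernel cmdline)
qm set 100 -machine q35
qm set 100 -hostpci0 02:00.0,pcie=1   # WAN port handed to pfSense
qm set 100 -hostpci1 02:00.1,pcie=1   # LAN port handed to pfSense
```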

For distributed storage, I am using GlusterFS 3.11; it works very well for LXC migration.
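Roughly what that looks like; the hostnames, brick paths, and storage name are placeholders rather than my actual layout.

```
# Build a 3-way replica volume across the PVE nodes (placeholder names/paths)
gluster peer probe pve2
gluster peer probe pve3
gluster volume create gv0 replica 3 \
    pve1:/data/brick1/gv0 pve2:/data/brick1/gv0 pve3:/data/brick1/gv0
gluster volume start gv0

# Register it in Proxmox as shared storage for VM images
pvesm add glusterfs gfs-shared --server pve1 --server2 pve2 --volume gv0 --content images
```

For container root disks, as far as I know the stock GlusterFS plugin is image-oriented, so mounting the volume and adding it as a Directory storage is the usual workaround.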

For live migration, I am using ZFS over iSCSI with SCST; eventually I am going to switch to iSER.
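On the PVE side this boils down to one /etc/pve/storage.cfg entry; the pool, portal, and target below are placeholders, and note the stock iscsiprovider choices are comstar/istgt/iet (plus LIO on newer releases), so SCST itself relies on a community provider plugin.

```
# /etc/pve/storage.cfg fragment (placeholder pool/portal/target)
zfs: zfs-iscsi
        iscsiprovider iet            # SCST needs a third-party provider plugin
        portal 10.10.60.10
        target iqn.2017-01.lab.storage:tank
        pool tank
        blocksize 4k
        sparse 1
        content images
```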
 

K D

Well-Known Member
Dec 24, 2016

Cool, very similar setup here: 5 UniFi switches and a USG for dual WAN. I had issues with pfSense DHCP offers anywhere other than VLAN 1. Never did solve the problem.
 

EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
For all my VMs and LXCs, I use static IPs pointing them at my pfSense.

For all other home clients and IoT devices, I use the USG's DHCP service to distribute the IP pool. I am really not that fond of the USG; it lacks various features, including IPv6.
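For containers that is just the net0 line; the container ID and subnet below are placeholders, with the pfSense LAN IP as gateway.

```
# Hypothetical container 101 on a 192.168.10.0/24 LAN behind pfSense
pct set 101 -net0 name=eth0,bridge=vmbr0,ip=192.168.10.51/24,gw=192.168.10.1
pct set 101 -nameserver 192.168.10.1
```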
 

Bebopbebop

New Member
Aug 17, 2017
I signed up for this forum specifically to ask what you did to get the Molex connectors so much better organized.

I'm referring to the picture above your post captioned "Now there are no objects blocking the case fan airflow".

I've looked at the photo a dozen times and I have my case open right now and I can't for the life of me figure it out.

Can you explain what changes you made once you cut off the original cable ties?
 

stsmith5150

New Member
Apr 11, 2018
I have a ZOTAC mITX board... would you go with the 810 or the 810A? I hear that mITX boards will fit in the mATX chassis.
 

Caleb

Member
Nov 16, 2015
Did you pass through a NIC for pfSense on Proxmox? I tried not using a dedicated NIC for the pfSense WAN, but it kept killing my cluster (corosync).
 

EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
With pfSense it is always best to pass through a dedicated NIC to it. In addition, it's best to separate your cluster + storage network from your client-facing network (use VLANs). That is what I do at home.
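As a rough illustration of the split; the interface names, VLAN ID, and subnets are placeholders, not my real addressing.

```
# /etc/network/interfaces sketch on one PVE node
auto vmbr0
iface vmbr0 inet static            # client-facing bridge for VMs/LXCs
        address 192.168.10.11/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto eno2.50
iface eno2.50 inet static          # VLAN 50 carries only corosync (cluster) traffic
        address 10.10.50.11/24

auto eno3
iface eno3 inet static             # 10 GbE leg for storage and migration traffic
        address 10.10.60.11/24
```

Corosync is then pointed at the 10.10.50.x addresses when the cluster is created or joined, so a busy client network cannot starve it.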
 
Nov 17, 2020
@EluRex this is an amazing build. How are temperatures for CPU / HDD / etc., given you have two large PCIe devices in such a small case? I'm starting a build in this case, so I'm trying to figure out which parts will give the best temperatures while leaving the option open down the line for two PCIe devices (10G card + GPU).

Also curious what you meant by "increase backplane hdd stability mod" as the above poster asked.