XenServer Home Lab


SGN

Member
Before we get into the main topic, I need to say a few words about my previous projects. Until October, I was not aware of STH… big shame, but I think I saved a lot of money thanks to that lack of knowledge :)

So, my first server build goes back to an AMD K6-400 running Slackware, the most popular Linux distro at the time. After that came lots of different builds, but it was always a single tower server plus some desktop-grade network gear.

Finally I moved to rack-mounted, much more server-grade hardware. Now we can talk about the details.

Build’s Name: Compute Node 1
Operating System/ Storage Platform: XenServer 6.5 (awaiting update to 7.1)
CPU: Intel Pentium G3220
Motherboard: Asus P9D C/4L
Chassis: X-Case 208 Pro
Drives: 3xSamsung F1 1TB, 2xWD Blue 1TB
RAM: 32GB (4x8GB ECC)
Add-in Cards: Intel SRCSATAWB HW RAID Controller, 2x Intel 1GbE NIC
Power Supply: Seasonic SS-400

Build’s Name: Storage Node 1
Operating System/ Storage Platform: FreeNAS 9.10
CPU: Intel Pentium D-1508
Motherboard: X10SDV-2C-7TP4F
Chassis: X-Case 212 Pro
Drives: 6xWD RED 3TB, 2x SM SATA-DOM 16GB
RAM: 16GB ECC
Add-in Cards:
Power Supply: Seasonic SS-400
Other Bits:


Network: grandpa Linksys SRW224-G4 + some other gear

UPS: APC SUA1500RMI2U


Use Cases:
I run roughly 10 VMs with network services, WWW, video streaming, and storage apps.


Found problems:
Low temperature and silence: they don't come together. I spent countless hours making this setup cool and silent. I added a lot of temperature sensors and fans. The whole rack is enclosed in a kind of wardrobe (custom-made for this purpose…). Right now the D-1508 temperature (with a Noctua fan) reaches ~60°C. The highest temperature inside the rack, at the hottest point, is around 35°C… In summer it will rise by 3-4 degrees. It's not very silent, but it's not very loud either: 1 m from the rack it's audible, but in the rooms it's very good.
CPU load: I would like to play with much more powerful CPUs, but it's hard to justify any further purchase while the whole Compute Node's load average is around 20%.
Network: the very old Linksys SRW224G4 is approaching its final switched packet. This is the first thing to replace.


What next?
10GbE networking, for sure. I want to connect the Compute and Storage nodes via a 10GbE switch and run a link to my PC.
 

Patrick

Administrator
Staff member
Welcome to the fray!

Looks like a great, low power setup.

I have never spent much time with XenServer so I always like reading about people's experiences with it.
 

TangoWhiskey9

Active Member
The 10GbE bug strikes again... until you realize that for 2-3 servers you can do 40GbE dirt cheap. It's never ending.
 

SGN

Member
It's been a while since my last update. Things are moving forward, so here is a new update on the setup and some more highlights of the infrastructure.
The first thing is obvious: a new switch. This time it's the very popular TP-Link T1700G-28TQ.
A new 10Gbit/s switch means a 10GbE NIC, so you won't be surprised to see this little card. All of you know it :)
Now I would like to talk a little bit about the infrastructure. It is quite complicated, as the thermal design of my rack is very demanding.
First: the rack produces around 150-200 W of heat, which requires good ventilation. A regular wardrobe has none at all, so I built it myself.
The rack exhaust is located at the top of the rack: 4x 120mm 12V exhaust fans. The hot air is collected in an air duct and merged into one common exhaust duct with 2x 150mm 230V duct fans, connected directly to the chimney. The 150mm duct fans are very noisy, so they are treated as a last resort for super hot days.
Please note that there was no forced intake airflow, just a 45x5 cm hole at the bottom of the wardrobe.

This setup worked fine until the storage node was mounted. Then it was no longer sufficient, and the temperature inside the rack was touching 42°C, which is way too high. I did some testing and found that air inflow is critical. No air inflow = game over.
So my next step was to introduce some inflow. I designed an Air Intake System (AIS). It's built around 50x50mm fans and some thermal sensors. I hate makeshift solutions, so some fancier stuff is in there too.
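If anyone wants to build something similar: here is a minimal sketch of reading DS18B20 1-wire temperature sensors via the Linux w1 sysfs interface (an illustration only, not my exact code; the sensor IDs are examples):

```python
import glob

# Minimal sketch: read DS18B20 1-wire sensors via Linux w1 sysfs
# (needs the w1-gpio and w1-therm kernel modules loaded).
def read_temps():
    temps = {}
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        with open(path) as f:
            crc_line, data_line = f.read().strip().splitlines()
        if not crc_line.endswith("YES"):
            continue  # CRC check failed, skip this reading
        sensor_id = path.split("/")[-2]  # e.g. "28-0316a2d8..." (example ID)
        temps[sensor_id] = int(data_line.split("t=")[1]) / 1000.0
    return temps

if __name__ == "__main__":
    for sid, temp_c in sorted(read_temps().items()):
        print(f"{sid}: {temp_c:.1f} C")
```

Each sensor simply shows up as a 28-* directory once the modules are loaded, so adding more sensors is just a wiring job.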

Almost finished solution:
20170310_174421_Easy-Resize.com.jpg

Mounted at the final destination:
20170911_204506.jpg

Closed doors:
20170911_204401_Easy-Resize.com.jpg

It's a really good solution temperature-wise. At full speed it blows really nicely. But... as you can imagine, 50x50mm fans at full speed (even the silent ones) are too loud.

Next step:
I'm inspired by the IKEA rack posted by Alex711. I'm going to sound-insulate all the air ducts, the inside walls, and the front door. I'll modify the Air Intake System: I will build a kind of box, put very silent 120mm fans inside, and attach it to the doors. Air will go through the box (lined with sound-insulating mats) and be blown horizontally at the server fronts. The current AIS (with 50mm fans) will be mounted on top of this inner box to blow upwards, e.g. at the switch. The project is complicated, so I will come back here with the design and ask for comments.

Key takeaways:
1. Air inflow is critical. Exhaust fans alone are not a good idea if you don't have any air intake.
2. Temperature monitoring is a must-have. You need to know what's going on to understand where to improve.
3. Fans at maximum speed are noisy; you need at least a basic way to control them and find the sweet spot between airflow and noise. You can start with simple analog voltage regulators (variable resistors); a sketch of a more automated control loop follows below.
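To illustrate takeaway 3: a minimal sketch of an automated control loop, assuming a Raspberry Pi (or similar SBC) drives the fans' PWM line. The pin, frequency and temperature curve are examples only, not my actual wiring:

```python
import glob
import time
import RPi.GPIO as GPIO  # assumes a Raspberry Pi drives the fans' PWM input

FAN_PIN = 18  # example GPIO pin, adjust to your wiring

def read_temps():
    """Read all DS18B20 sensors (same w1 sysfs trick as the earlier sketch)."""
    temps = []
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        with open(path) as f:
            lines = f.read().strip().splitlines()
        if lines and lines[0].endswith("YES"):
            temps.append(int(lines[1].split("t=")[1]) / 1000.0)
    return temps

def duty_for(temp_c):
    """Idle below 30 C, full speed above 40 C, linear ramp in between."""
    if temp_c <= 30.0:
        return 20.0
    if temp_c >= 40.0:
        return 100.0
    return 20.0 + (temp_c - 30.0) * 8.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PIN, GPIO.OUT)
# 4-pin fans expect ~25 kHz PWM; RPi.GPIO's software PWM won't reliably
# reach that, so a real build would use a hardware PWM channel instead.
pwm = GPIO.PWM(FAN_PIN, 25000)
pwm.start(40)
try:
    while True:
        temps = read_temps()
        if temps:
            pwm.ChangeDutyCycle(duty_for(max(temps)))  # follow the hottest sensor
        time.sleep(10)
finally:
    GPIO.cleanup()
```

The control logic is the interesting part: follow the hottest sensor and ramp smoothly instead of jumping between off and full speed.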
 

SGN

Member
The work is progressing. At the moment the focus is on noise insulation and improving air inflow.
First of all, I have almost finished sticking sound-insulating mats to the case. I used 3 cm thick, 140 kg/m³ mats.

I also plan to put an additional layer of mats right above the air ducts. The whole duct (including the vertical part) will be insulated to eliminate any weak point where noise can escape.

The next open point is improving air inflow. I have a plan for this, but it's quite costly and time-consuming, so I need a proof of concept first. Tests are ongoing.

I took an old cardboard box and mounted 5x 80mm fans on a 2U panel. It's sealed and cut where needed.

I mounted it on the door of the wooden case:

Those fans move almost 3 times more air (300 m³/h) than the previous solution (8x 50mm fans, ~100 m³/h). The current results are great: the temperature dropped by 3-4°C! The top inside temperature is around 32.5°C, with a room temperature of 22°C.
The final solution will be even more powerful, as I would like to add 6x 120mm fans (600 m³/h) and reuse the old solution to blow in the switch's direction, which adds another 100 m³/h. Total throughput will be 700 m³/h, with silence. It will be divided into stages, so fan groups can be switched ON and OFF as needed. I'll return with the design once it's done.
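As a sanity check on these numbers: the airflow needed to carry a given heat load out of an enclosure can be estimated from standard air properties. A quick sketch, using my earlier ~150-200 W heat estimate:

```python
# Back-of-envelope airflow check: Q = P / (rho * cp * dT)
# rho * cp for air is ~1200 J/(m^3*K), so in m^3/h this is ~3 * P / dT.
def airflow_m3h(power_w, delta_t_c):
    RHO_CP = 1200.0  # J/(m^3*K), air at roughly room temperature
    return 3600.0 * power_w / (RHO_CP * delta_t_c)

# ~200 W rack, exhaust air allowed to run 5 C warmer than the intake:
print(airflow_m3h(200, 5))  # -> 120 m^3/h theoretical minimum
```

The theoretical minimum is far below 700 m³/h, but enclosures leak, mats and grilles restrict flow, and air doesn't mix perfectly; a big margin is exactly what lets the fans spin slowly and quietly.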
And guys, you know what lower temps and much better airflow mean? ]:->
 

SGN

Member
I'm back with some updates. This time it's the last stage of airflow improvements and noise insulation.
I have rebuilt much of the Air Intake System and added more power. I'm really happy with the results; I can say I'm done with this topic, at least for now. I call it Air Intake System 2 (AIS2).
I put a lot of effort into making it solid. I also designed some "curse-free" mounting: I just put AIS2 (cut at the bottom for a perfect fit) onto this ALU element, and horizontal and vertical alignment is done.

Some pictures below.
The parts: I ordered a custom-made ALU plate for the fans, plus wooden plates to mount things the way I wanted.

Everything is screwed together and really rock solid:

Mounted fans:

Other details:

AIS2 closed from the bottom (there is not much clearance to the sound foam, but it works):

And some final sealing at the bottom, to stop it from sucking air from inside the rack:

Finally mounted AIS2:
 

Aestr

Well-Known Member
Great work on the soundproofing. Threads like this always make me think twice about saving up for a Netshelter CX, but I don't think I have the dedication required to produce something on this level.
 

SGN

Member
Thanks guys!
Now I have some thermal capacity to add new stuff. This time, I plan to add a bare-metal pfSense box.
I need your advice on HW selection. I'm considering 2 options:
1. Supermicro | Products | Motherboards | Xeon® Boards | X10SDV-2C-TP4F
2. A2SDi-4C-HLN4F | Motherboards | Products - Super Micro Computer, Inc.

Priorities:
1. As low a heat/power footprint as possible.
2. As future-proof as possible (10GbE).
3. The ability to push as much throughput as possible over OpenVPN (a quick way to gauge this is sketched below).

Both boards are almost the same price, so that's not a big deal, but the C3558 has 4x 1GbE, while the D-1508 has 2x 1GbE + 2x 10GbE. There is also a hidden cost in the D-1508, as the CPU alone has a 9 W higher TDP.
On the other hand, to add a 10GbE card to the C3558 I would need to pick a ConnectX-3, one of the only cards that fits PCIe 3.0 with 4 lanes. It has only one SFP+ port (this is going to be a router, so 2 would be welcome), and it would also add a premium in power consumption.
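Since OpenVPN 2.x encrypts on a single thread, single-core AES speed is what really matters here. A rough way to compare the candidates (only a proxy, assuming OpenSSL is installed; real OpenVPN adds tunnel and HMAC overhead):

```python
import subprocess

# Rough proxy for OpenVPN crypto throughput on a candidate CPU:
# single-core AES-256-GCM speed (OpenVPN 2.x encrypts on one thread).
result = subprocess.run(
    ["openssl", "speed", "-evp", "aes-256-gcm"],
    capture_output=True, text=True, check=True,
)
# The last line of the report is the throughput row for this cipher.
print(result.stdout.strip().splitlines()[-1])
```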

What's your recommendation?
 

Aestr

Well-Known Member
I think either of those boards is a good choice for a pfSense box in general. Where they will likely start to have issues is high-throughput OpenVPN, as it's single-threaded. You'll probably also find that routing 10Gb will be a challenge: you'll get over 1Gb, but don't expect 10Gb to be easy. Do you actually need to route/filter/etc. at 10Gb, or would a switch serve you better?
 

SGN

Member
I don't even need 1G at the moment; I'm thinking about future use cases.
The C3558 board has QAT, so it could be a nice boost for OpenVPN performance.
Right now I'm leaning towards the Atom board. Has anyone here used such a board?
 

SGN

Member
New update!
It's been a while since the last update. A new node is online.
This time it's bare-metal pfSense. The main goal was to move routing and the firewall out of a VM. Why? A VM is slow to power on and unreliable whenever I need to do maintenance on the compute node. Now it's plug and play: no worries and no pressure that "there is, again, no Internet" ;)

Build’s Name: FW Node 1
Operating System/ Storage Platform: pfSense 2.4 series
CPU: Intel Atom C3558
Motherboard: Supermicro A2SDi-4C-HLN4F
Chassis: Supermicro SC505-203B
Drives: SM SATA-DOM 16GB
RAM: 8GB ECC

Next step:
I definitely need to start moving the Compute Node to something more powerful (and more power-efficient). I need your advice. I have 2 options:
1. Go with a Xeon D-1518. If I run out of horsepower, I add a second node and play with a cluster.
2. Go with a Xeon D-1537 and forget about any lack of processing power.

My biggest and heaviest workload would be playing a single Plex 4K stream. There are plenty of other things, but they are not very CPU-hungry.
Price-wise there is no big difference between 2x D-1518 and 1x D-1537 (the premium is mainly in the RAM sticks).

 

Aestr

Well-Known Member
4K and Plex is a tricky subject. There are a lot of posts over on the Plex forums that get into the nuances, but the type of 4K files you have plays a very large part in how easily your system handles them. From the sounds of things, GPU acceleration is very helpful for this, so the recommendations seem to shift toward consumer CPUs rather than Xeons.

I'm just starting down the 4K path, and I can say that my experience with Xeon D-1541s has not been great. I haven't played around with it enough yet to say that the problem is the CPU and not how I'm doing it, but again, I think you'll find a lot of discussion on this topic over at the Plex forums.
 

Jeggs101

Well-Known Member
Does pfSense use the onboard C3000 NICs? I thought it was FreeBSD 11.2 that supports them OOB, so you could only use pfSense's own appliance with them.

Is it still running Xen?
 

EffrafaxOfWug

Radioactive Member
When people say "play a 4k Plex stream", do they mean that the 4k stream just needs to be punted out over the network to $playback_device, or that a file (presumably not already at 4k?) needs to be transcoded into a different format?

I don't do anything in the realm of Plex myself (I use HTPCs, so no need for transcoding), but assuming H.264, any output at 4K will need a substantial amount of CPU power - probably 8 cores minimum. ffmpeg on my 6-core Xeon v3 can only manage about 25-30 fps on a simple 4K source going flat out across all threads using the fast preset (1080p content is a quarter of the pixels and can easily be done at 60 fps), so you'd almost certainly require some form of acceleration for this to work.
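If you want to gauge a given CPU before buying, timing a short software transcode is the easiest test. A minimal sketch (the source file name is a placeholder; ffmpeg must be on the PATH):

```python
import subprocess
import time

# Time a 60-second 4K -> 1080p H.264 software transcode and report how
# far from real time the CPU is. SRC is a placeholder sample file.
SRC = "sample_4k.mkv"
cmd = [
    "ffmpeg", "-y", "-i", SRC,
    "-t", "60",                    # only transcode the first 60 s
    "-vf", "scale=1920:-2",        # downscale 4K to 1080p
    "-c:v", "libx264", "-preset", "fast",
    "-an",                         # video only, audio is cheap anyway
    "-f", "null", "-",             # discard the encoded output
]
start = time.monotonic()
subprocess.run(cmd, check=True, capture_output=True)
elapsed = time.monotonic() - start
print(f"{60.0 / elapsed:.2f}x real time")  # below 1.0x = can't keep up live
```

Anything below 1.0x means the box can't transcode that particular file in real time on the CPU alone.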
 

SGN

Member
A 4K stream for me means: take a file with 4K resolution and stream it to the TV over DLNA or another protocol. Personally I use Serviio, which uses ffmpeg in the background.
Jeggs101 said: "Does pfSense use the onboard C3000 NICs? I thought it was FreeBSD 11.2 that supports them OOB, so you could only use pfSense's own appliance with them. Is it still running Xen?"
Yes, I use the C3000 embedded NICs. pfSense will support them out of the box from FreeBSD 11.2, but right now there are drivers you can load with the current pfSense version. Please check this post.