Kal's Home Lab v2.0


Kal G

Active Member
Oct 29, 2014
160
46
28
45
After selling my venerable C6100 over a year ago, I've been finding my single home server too limiting for serious lab work (e.g., HA, VSAN). It's finally time to remedy that.

Requirements: 10 GbE, minimum of three nodes, small footprint, and quiet (must pass muster with my wife)

To that end, here's what I'm thinking:

ESXi Nodes (x3):

Motherboard: Supermicro X10SDV-TLN4F (D-1541 CPU)
Chassis: Logic Supply CT100 (temporary)
Drives: 2x Samsung PM853T 960GB (capacity), Intel S3710 400GB (cache), 16GB SanDisk Cruzer Fit (OS)
Drive Chassis: Chenbro 12x 3.5" drive cage from an RM235 series chassis (temporary)
RAM: 2x 32GB Samsung DDR4-2133 Registered ECC
Add-in Cards: Intel X520-DA2 (one node only)
Power Supply: SeaSonic SS350M1U 350W Flex ATX (temporary)

Zero Clients: 2x HP T310 (Tera2 PCoIP)

Network: Cisco WS-C3560CX-8XPD-S (fanless with 2x 10GBase-T and 2x SFP+)

To explain, the CT100 is a Mini-ITX test bench with room for a Flex ATX power supply. I'll use it to get up and running while I test various small chassis. The Chenbro drive cage will also go away once a final chassis selection is made.

I'll post pictures as I go.
 
  • Like
Reactions: MiniKnight

MiniKnight

Well-Known Member
Mar 30, 2012
3,077
976
113
NYC
Are you looking for feedback, or is this just the first of many? Make another post to save room for pics!
 

Kal G

Active Member
Oct 29, 2014
160
46
28
45
Good idea. My post was the first of several. That said, feedback is always welcome.
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
42
Kal G said:
Initial test fit. Thanks to @Patrick for the idea.

View attachment 4018

Only downside is the bottom node has to be powered on for the other two to receive power. Ordered a picoPSU to test with.
You mad scientists/geniuses! Are those extended standoffs for the triple stack o' Xeon-D 3-node all-in-one chassis/case setup? How do you deal w/ or provide pwr, separate pwr supply bricks/picoPSUs?
 
  • Like
Reactions: Chuntzu

PigLover

Moderator
Jan 26, 2011
3,219
1,582
113
Hey - maybe you could run some open source cloud software on that and call it an "openstack" :)
 
  • Like
Reactions: Hank C

Kal G

Active Member
Oct 29, 2014
160
46
28
45
whitey said:
Are those extended standoffs for the triple stack o' Xeon-D 3-node all-in-one chassis/case setup? How do you deal w/ or provide pwr, separate pwr supply bricks/picoPSUs?
I've been toying with the idea of placing all three nodes in a single 2U chassis (think C6100). In that design, power would be provided by a 12V server PSU and distributed to each node. With the 12V connectors on each board, no picoPSU would be required. This would allow me to use quieter 80mm fans and place hot-swap drive arrays in the front of the chassis. The downside is that I'd lose the use of the PCIe slot on two of the three nodes. Also, I'm a terrible metalworker; I'd have to pay someone to build it for me.
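Rough back-of-envelope math on the shared-supply idea. All the wattages below except the D-1541's 45W TDP are my assumptions for illustration, not measurements:

```python
# Rough per-node power budget for the triple Xeon-D stack.
# Only the 45W TDP is from Intel's spec; the other wattages
# are ballpark assumptions, not measured values.
COMPONENTS_W = {
    "D-1541 SoC (45W TDP)": 45.0,
    "2x 32GB RDIMM": 8.0,          # assume ~4W per registered DIMM
    "2x PM853T + 1x S3710": 15.0,  # assume ~5W per SATA SSD under load
    "Fans + USB boot stick": 7.0,
}

def node_budget_w(components=COMPONENTS_W, headroom=1.25):
    """Sum component draw and add 25% headroom for conversion loss and peaks."""
    return sum(components.values()) * headroom

per_node = node_budget_w()
print(f"Per node (with headroom): {per_node:.1f} W")   # ~93.8 W
print(f"Three nodes: {3 * per_node:.1f} W")            # ~281.2 W
```

If those guesses are in the right neighborhood, a picoPSU-160-XT per node has comfortable margin, and a single shared 12V supply would need roughly 300W for all three.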

More than likely, I'll go with a commercial solution. The two current contenders are the Supermicro SC113MFAC2-605CB and the Logic Supply MC600. The SC113 has the advantage of a hot-swap backplane with NVMe and the option to use a larger board later. The MC600 has the advantage of being smaller and likely quieter (I have to test airflow though), and it allows the use of two PCIe cards.

I have the MC600 sitting on my bench. Once the picoPSU arrives, I'll post more about my experience with it.

PigLover said:
Hey - maybe you could run some open source cloud software on that and call it an "openstack" :)
I must still be tired. I actually found that amusing. :)
 
Last edited:

Kal G

Active Member
Oct 29, 2014
160
46
28
45
Anybody else notice the new bifurcation options on the Supermicro X10SDV series?

- x4x4x4x4
- x8x4x4
- x4x4x8
- x8x8

The quad x4 option might give us the opportunity to enable five NVMe SSDs using a Supermicro AOC-SLG3-4E4R adapter and an M.2 to SFF-8643 adapter.
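Each of those options carves the slot's 16 lanes up differently. A trivial check (just my own illustration, not any Supermicro tool) makes that explicit:

```python
def parse_bifurcation(option: str) -> list[int]:
    """Split a BIOS bifurcation string like 'x4x4x4x4' into link widths."""
    widths = [int(part) for part in option.lower().split("x") if part]
    if sum(widths) != 16:
        raise ValueError(f"{option!r} does not sum to 16 lanes")
    return widths

# The four options from the BIOS all account for the full x16 slot.
for opt in ["x4x4x4x4", "x8x4x4", "x4x4x8", "x8x8"]:
    print(opt, "->", parse_bifurcation(opt))
# x4x4x4x4 -> [4, 4, 4, 4]: four x4 links for four NVMe drives on the
# AOC-SLG3-4E4R, with the fifth drive hanging off the M.2 slot.
```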
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,077
976
113
NYC
Kal G said:
Anybody else notice the new bifurcation options on the Supermicro X10SDV series?

- x4x4x4x4
- x8x4x4
- x4x4x8
- x8x8

The quad x4 option might give us the opportunity to enable five NVMe SSDs using a Supermicro AOC-SLG3-4E4R adapter and an M.2 to SFF-8643 adapter.
Last I tried, you needed a PLX card.
 

Kal G

Active Member
Oct 29, 2014
160
46
28
45
picoPSU arrived. :D I performed some quick, non-scientific tests using the Logic Supply MC600 case. The CPU stress test was a Mersenne prime test with 16 threads.
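For reference, this is the kind of workload I mean. A minimal Lucas-Lehmer sketch in Python (the actual run used a dedicated multi-threaded tester; this is just to illustrate the load):

```python
from multiprocessing import Pool

def lucas_lehmer(p: int) -> bool:
    """True if the Mersenne number 2^p - 1 is prime (for odd prime p)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

if __name__ == "__main__":
    # Hammer all cores with big-integer squaring, similar in spirit
    # to the 16-thread stress run. These exponents are all known
    # Mersenne prime exponents.
    exponents = [9689, 9941, 11213, 19937] * 4
    with Pool(16) as pool:
        results = pool.map(lucas_lehmer, exponents)
    print(all(results))  # True: each 2^p - 1 here is a Mersenne prime
```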

Note: this build uses an Intel X520-DA2 and a picoPSU-160-XT, and the stock CPU fan has been replaced with a 50mm Gelid Silent 5. I'll post the test results using the stock fan later.

IMG_3920a.jpg

D-1541 operating temperature: 0-108°C
Room temperature: 19.5°C

Temperature while idle: 36°C
Temperature under load: 82°C :eek:

While the temperature under load is well within the processor's operating range, it is higher than I would like. The MC600 case was designed for cross-case airflow from right to left, which doesn't work well with the front-to-rear memory layout on the X10SDV. So, since hot air rises, I removed the top cover.

Temperature while idle (cover removed): 33°C
Temperature under load (cover removed): 69°C

This is much better and is more in line with what I expected.

My first impression is that this case works for the X10SDV-TLN4F, but I would need to cut ventilation into the chassis cover, and stacking these cases would then be an issue.

Quick note on power: this case has a cutout for the barrel connector provided with the picoPSU. However, the DB9 cutouts, which would otherwise accept the 4-pin power connector used by the higher-wattage power bricks, are blocked when a PCIe card is installed. The best option would be to widen the barrel connector cutout and drill holes to hold the adapter in place.
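For anyone wanting to log temperatures the same way, here's a minimal sketch assuming a Linux host that exposes thermal zones via sysfs (zone names and paths vary by platform, and on ESXi itself you'd read the sensors over IPMI instead, so treat this as illustrative):

```python
from pathlib import Path

def millidegrees_to_c(raw: str) -> float:
    """sysfs thermal files report millidegrees Celsius as plain integers."""
    return int(raw.strip()) / 1000.0

def read_cpu_temps(base: str = "/sys/class/thermal") -> dict[str, float]:
    """Read every thermal zone; keys are zone types (e.g. 'x86_pkg_temp')."""
    temps: dict[str, float] = {}
    root = Path(base)
    if not root.is_dir():
        return temps
    for zone in sorted(root.glob("thermal_zone*")):
        try:
            ztype = (zone / "type").read_text().strip()
            temps[ztype] = millidegrees_to_c((zone / "temp").read_text())
        except OSError:
            continue  # some zones can't be read; skip them
    return temps

if __name__ == "__main__":
    for name, temp in read_cpu_temps().items():
        print(f"{name}: {temp:.1f} C")
```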
 

Kal G

Active Member
Oct 29, 2014
160
46
28
45
Update on temperatures with the MC600.

Using the stock fan, the temperatures were roughly 1°C higher than with the Gelid.

As purchased, the MC600 won't work for my purposes, but it turns out that the front faceplate is removable. It only took a few minutes to temporarily rig the included fan bracket to push air from the front of the case (I also removed the I/O shield to let the hot air escape).

IMG_3928a.jpg

Yes, those are twist ties holding the bracket in place. :D

Temperature while idle (front fan mount): 31°C
Temperature under load (front fan mount): 68°C

Keep in mind that this is using the cheap fans that came with the case (one Sunon, one unmarked). Nevertheless, the results are promising.

I'm thinking a third fan to cool the PCIe cards might be a good idea as well. Next I need to fabricate a test bracket to mount all three fans and replace the front faceplate. It would be nice to be able to reuse the existing power button/USB board as well.
 
Last edited:
  • Like
Reactions: MiniKnight

Kal G

Active Member
Oct 29, 2014
160
46
28
45
I finished making a faceplate template out of cardboard. Not the ideal material, but a lot cheaper to replace if I screw up. If I decide to move forward with this case, I'll make a final version out of metal or acrylic.

IMG_3930a.JPG

This uses four Noctua 60mm PWM fans. I may only use three on the final version (removing one of the two cooling the PCIe slots). Strangely, any fan connected to the FAN4 header seems to run several hundred RPM faster than those connected to FAN1 and FAN3 (FAN2 is connected to the CPU fan). In this case, FAN4 ran at 1900 RPM while FAN1 and FAN3 ran at 1400 RPM.

Initial testing puts the temperature under load at 73°C

While I like how quiet the Noctua fans are, they don't move as much air as the stock fans that came with the case. Also, the internal wiring isn't as clean as I would like; I'll have to see if it can be tidied up.

* Edited to fix RPM typo.
 
Last edited:

Kal G

Active Member
Oct 29, 2014
160
46
28
45
After testing the MC600 with the fan modifications for several days, I've decided to move ahead with the build and ordered two more cases. I'm also going to try using a 10GBase-T SFP+ module in the switch instead of the Intel X520-DA2 card in order to use the same configuration for all three nodes and free up a PCIe slot. I have some ideas on how to better utilize the M.2 slot as well.

More to follow.