CLF's Micro-Lab


CreoleLakerFan

Background

I work as an Information Security Architect and am putting together a lab to hone exploit development, pen-testing, and other security skills. As a secondary goal, I will be working toward obtaining VMware credentials. I took the VMware ICM course back in 2012 but found it useless except for fulfilling the instructor-led course requirement for the VCP. Since I have never held a focused virtualization role (lots of side projects), some lab play is essential toward obtaining the VCP (and beyond).

I obtained the bulk of the gear for my lab in November-December of 2013, but 2014 was a very busy year - we moved into our new home in May 2014 and had our second child arrive in mid-November. I will be taking advantage of 12 weeks of family leave over the next year, the first stretch of which begins soon. I hope to devote much of that time to study and lab work.

I initially planned on a C6100 but ultimately decided against it. We live in California's Central Valley and summer temperatures spike to 110°F, so placing a rack in the garage gave me pause. I chose the lab components for their low power draw, low noise, and small physical footprint. This turned out to be a very good decision, as my office is west-facing and I have a difficult time keeping it cool in the summer (I work exclusively from home).

The lab hosts are on the bottom shelf: a NAS and two ESXi nodes. Networking is provided by an SG300-10 (bottom) and point-to-point InfiniBand cables. The UNAS 800 on the top shelf runs FreeBSD, serving a 20TB RAIDZ2 pool to my home; the top SG300-10 is for the office/home network.



The home we purchased was new construction; I had 24 Cat5e drops run during the framing stage. All cabling (coax, Cat5e, Cat4 for phone, telco) terminates in a media closet. I had the home NAS there for a while, but the closet has no ventilation. I have a bid out to have it ventilated; for now the NAS and lab reside in the shorty bookcase in my office.
 

CreoleLakerFan

Build’s Name: CLF's micro-lab (ESXi nodes)
Operating System: ESXi 5.5
CPU: Intel Atom C2750 Avoton
Motherboard: SuperMicro A1SAi-2750
Chassis: Rosewill Legacy V3 Plus-B
Drives: Seagate Constellation2 1TB, Crucial M4 128GB
RAM: 32 GB ECC (KVR13E9K2/8 x 4)
Add-in Cards: Mellanox ConnectX MHGH28-XTC
Power Supply: Antec EA-380D

Usage Profile: ESXi lab nodes

Other information:

It was a toss-up between these boards and the ASRock C2750D4I, but the lack of availability of the ASRock boards in December 2013 made it an easy choice. I was actually leaning toward the ASRock due to its full-sized RAM slots, but I'm very happy with SuperMicro's IPMI solution and their legendary customer service, so much so that I am standardizing on SuperMicro for all future projects.

Build Notes

I initially took inspiration from Nitro's thread and started out with a pair of iStarUSA S21-20F2 chassis. Since these nodes were to house a Mellanox InfiniBand adapter, I ordered the iStar PCIe riser cards. Unfortunately, the PCIe x8 slot on the A1SAi is closed-ended, rendering it physically incompatible with the x16 riser cards.



Back to the drawing board: I went with a pair of Rosewill Legacy V3 chassis powered by Antec EA-380D ATX PSUs. The build quality is not as nice as the iStar cases, but the switch did alleviate a cooling concern with the passively cooled A1SAi (or maybe that's just confirmation bias). The Rosewills have mounts for 4 x 40mm fans; I had four on my parts bench, so I put two in each. I do not anticipate any issues keeping these platforms cool.

Current Status:

The hardware builds are complete. The internal USB 2.0 ports are populated with 8 GB SanDisk USB drives, which will house the ESXi install. The SSDs/HDDs will be used for vSAN.

Next Steps:

Customize the ESXi install ISO with the InfiniBand and i354 drivers, then install via IPMI to the USB 2.0 drives.
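
For reference, here is a minimal sketch of how that customization could be scripted, assuming a Windows workstation with VMware PowerCLI's Image Builder cmdlets installed, driven here from Python. The depot paths, image profile name, and driver package names are placeholders, not the exact bundles used in this build.

```python
# Hypothetical sketch: slipstream extra drivers into an ESXi 5.5 install ISO by
# driving PowerCLI's Image Builder cmdlets from Python. All paths, the clone
# profile name, and the package names below are placeholders.
import subprocess

ps_script = r"""
Add-EsxSoftwareDepot 'C:\depot\VMware-ESXi-5.5.0-offline-bundle.zip'   # base ESXi bundle (placeholder)
Add-EsxSoftwareDepot 'C:\depot\mellanox-ib-offline-bundle.zip'         # InfiniBand driver bundle (placeholder)
Add-EsxSoftwareDepot 'C:\depot\intel-igb-offline-bundle.zip'           # i354/igb driver bundle (placeholder)

# Clone the stock profile, add the driver packages, then export a bootable ISO
New-EsxImageProfile -CloneProfile 'ESXi-5.5.0-1331820-standard' -Name 'ESXi55-CLF' -Vendor 'CLF'
Add-EsxSoftwarePackage -ImageProfile 'ESXi55-CLF' -SoftwarePackage net-mlx4-ib, net-igb
Export-EsxImageProfile -ImageProfile 'ESXi55-CLF' -ExportToIso -FilePath 'C:\iso\ESXi55-CLF.iso'
"""

# Requires Windows PowerShell with the PowerCLI modules available.
subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps_script], check=True)
```

The resulting ISO can then be mounted through the Supermicro IPMI virtual media console and installed to the internal USB drives.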
 

CreoleLakerFan

Build’s Name: CLF's micro-lab (Storage)
Storage Platform: NexentaStor Community Edition 4.0.3
CPU: Intel Core i3-2310M
Motherboard: Jetway NF9G QM77 Mini-ITX Motherboard
Chassis: Mini-ITX 4-Bay Hot-Swap Chassis
Drives: Seagate Constellation2 1TB x 4, Crucial M4 64GB, Intel 311 20GB
RAM: 2x4GB G.Skill 1066 DDR3 SO-DIMM
Add-in Cards: Mellanox ConnectX MHGH28-XTC
Power Supply: Seasonic SS-300M1U

Usage Profile: Block storage for ESXi nodes

Other information:

This build is a platform to test serving block storage over InfiniBand to the ESXi nodes in my lab. I first evaluated OmniOS/napp-it on this platform, but the lack of VAAI support pushed me to NexentaStor. Much of the platform is built on repurposed kit, which will be reallocated to other projects once I have completed my evaluation and settled on a NAS platform.

Finding a suitable board took some doing. I settled on the Jetway because it is mini-ITX and has six SATA ports, mSATA, and dual Intel LAN ports. It helped that I have plenty of DDR3 SO-DIMMs in my parts bin and a fairly nice chassis from a previous NAS. This is a temporary build, so no ECC. I scored the i3-2310M off eBay for $10 to drive the platform.

Initially I paired a refurbished Samsung Spinpoint from eBay with three existing units out of my parts drawer. By the time I got around to the build, I discovered the "refurbished" drive was throwing S.M.A.R.T. errors, so I replaced the drives with a fresh lot of unused Constellation2 drives.

Build Notes

The CPU socket on this motherboard has a very easy-to-miss locking mechanism that is not documented in the manual.



I do not have the words to express how much frustration this caused me; it took months of on-and-off effort to work through. I nearly scrapped the whole project and purchased entirely new hardware, but I kept after it. Since I had only spent $10 on the i3-2310M, after a while I ordered a second board and two new CPUs to test with. To my absolute annoyance, the exact same symptoms appeared with the second lot of hardware. I reasoned that between two boards and three CPUs it was highly unlikely that defective hardware was causing the issue. I scoured the manual and the interwebs looking for a solution, to no avail. I finally gave up and set up an RMA for the second lot of parts. Before dismantling everything and sending it back to the vendor, I gave it one last shot and spotted the screw on the socket. Finally, success!

It really doesn't help that the locking screw is similar in color to the plate on the socket, making it very difficult to spot unless you know what you're looking for. Perhaps it is obvious to some, but I'd never handled a Socket G2 CPU, so it literally took me months of sporadic effort to figure this out.

Current Status:

I have Nexenta up and running on the host. The drives are currently mirrored, with the M4 serving as L2ARC and the Intel 311 as the LOG/ZIL device.
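
For readers following along, here is a minimal sketch of what that layout looks like at the zpool level, assuming the four Constellation.2 drives form two mirrored pairs; the pool name and device names are placeholders rather than the actual NexentaStor device IDs.

```python
# Hypothetical sketch of the pool layout described above, expressed as the
# underlying zpool commands (NexentaStor is illumos-based, so the same syntax
# applies at a root shell). Pool and device names are placeholders.
import subprocess

def run(*cmd):
    """Run a command, raising if it fails."""
    subprocess.run(cmd, check=True)

# Two mirrored pairs of Constellation.2 1TB drives
run("zpool", "create", "labpool",
    "mirror", "c1t0d0", "c1t1d0",
    "mirror", "c1t2d0", "c1t3d0")

# Crucial M4 as L2ARC (read cache), Intel 311 as the dedicated log (ZIL) device
run("zpool", "add", "labpool", "cache", "c1t4d0")
run("zpool", "add", "labpool", "log", "c1t5d0")
```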

Next Steps

Get InfiniBand working

Final Goal

ZFS NAS w/VAAI, serving iSCSI blocks via IPoIB.
 

Patrick

I think you need a NUC or two to keep the theme going! I cannot wait to see more.
 

CreoleLakerFan

I think you need a NUC or two to keep the theme going! I cannot wait to see more.
Thanks! I don't yet have any plans to expand the lab stuff ... going to finish up the VCP and focus on security for the remainder of 2015, but if I expand the lab footprint it's definitely going to be NUC/small footprint type stuff.

I have some definite plans to expand the storage infrastructure serving my home: I am planning to roll the Lab NAS and the Home NAS into VMs on a single hardware platform that is more capable than either of what I have now. I am running FreeNAS 9.3 on the "Home NAS," but there is no InfiniBand support until version 10.x (at least). I'm also not happy with the current setup - eight 4TB drives in RAIDZ2 is suboptimal, and I want to bump that to a 10-drive RAIDZ2, but the UNAS 800 limits drive expansion, so another chassis is needed. The pool there is already larger than is permitted under Nexenta Community Edition, and OmniOS/napp-it lacks VAAI.
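
Presumably the "suboptimal" comment refers to the old rule of thumb that RAIDZ2 vdevs should hold a power-of-two number of data disks: a 10-wide vdev has 8 data disks, while an 8-wide vdev has 6. A rough usable-capacity comparison (raw figures, ignoring formatting overhead and ZFS metadata):

```python
# Back-of-the-envelope RAIDZ2 capacity math for the widths discussed above.

def raidz2_usable_tb(drives: int, size_tb: float) -> float:
    """RAIDZ2 spends two drives' worth of space on parity; the rest is data."""
    return (drives - 2) * size_tb

for width in (8, 10):
    print(f"{width} x 4TB RAIDZ2: {width - 2} data disks, "
          f"~{raidz2_usable_tb(width, 4):.0f} TB usable (raw)")

# Output:
# 8 x 4TB RAIDZ2: 6 data disks, ~24 TB usable (raw)
# 10 x 4TB RAIDZ2: 8 data disks, ~32 TB usable (raw)
```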

The larger pool is serving uncompressed BRD and DVD MKV rips and old MCE recordings to various HTPCs, and will eventually house the recordings from whatever DVR solution I settle on. I will likely end up hosting it under OmniOS with napp-it and use Nexenta to serve block storage to VMs. Or maybe getting everything virtualized expands the options, since IB HBA support in the storage appliance won't be an issue.

I'm kind of at a loss for a motherboard, though - I was looking at a SuperMicro X10SL7, but expansion and storage are dicey with only two PCIe slots ... I would prefer to pass a 9211-8i through to the Nexenta VM for a 6-drive RAIDZ2 with ZIL and L2ARC, have a free slot for an InfiniBand HBA, and a third for a SAS expander. I'm looking at some other options but have settled on nothing yet. Eventually an IB switch is going to go in the closet, as I can see a minimum of four hosts tied into the IB fabric.

Most everything else I have planned is media related - DIY whole-home DVR/media platform, perhaps some security cameras ... laying out the infrastructure will be the fun part.
 

CreoleLakerFan

What kind?
Offensive, I think. Pen testing, hypervisor cracking, and/or cloud security... it's pretty easy to build a VCP lab-in-a-box, but I specifically wanted real ESXi hosts so I can beat up on them once I've gotten past the VMware cert part. I've got a couple of GIAC certs already, plan on writing a Gold Paper and taking a couple of SANS vLive courses this year; I also want to start getting geared up for the GSE challenge.
 

CreoleLakerFan

1. Awesome setup :)
2. Want to sell me those PCIe Risers?
Thanks! I am finding it has its limitations - I'm on chapter three of Scott Lowe's "Mastering VMware vSphere 5.5" and already thinking about how I can expand my lab with existing hardware. I have a NUC DC54327HYE and 16GB of RAM which is going to be drafted into service as a third ESXi host, and another Jetway board which will likely be used as a second NexentaStor host to test Storage vMotion. (I really hate using my "production" home gear to run experiments, which rules out the FreeNAS platform hosting my software and media repository.)

I am absolutely willing to sell you the riser cards, as they are collecting dust at the moment. Shoot me a PM, we can work out details. :)
 

Patriot

Random note on the riser card... PCIe lanes are all independent, so you can cut off the last 8 lanes and it will work fine... literally grab a Dremel and cut them off. Yes, you will only have 8 lanes of data, but 8 working lanes beat a riser that doesn't fit at all.
For a lab server a few years back I cut 15 of the 16 lanes off a GeForce 6100 to get higher-resolution support on an ML330 G6, and yes, it worked.
 

CreoleLakerFan

I got InfiniBand up and running in the lab today. I took a minor detour along the way ... I scrapped the host I was using for Nexenta and purchased a used SM X8SIA-F with an L3426. My plan was to roll my home NAS and lab NAS into it, but after taking a power reading with my new Kill-a-Watt I decided to use that board/CPU for another project. For the time being it is going to host my lab NAS, but I power everything down when I'm not training on it.

Speaking of power readings, I was fairly appalled to find that my "low power" FreeNAS/ZFS box idles at ~83W, so my current obsession is finding another solution. I see myself moving away from ZFS and onto a Windows drive-pooling platform for the power savings. Current leading candidates are Windows Storage Spaces and SnapRAID. It's all fairly static content - BRD/DVD/CD rips, photos, documents, and my software repository. The resource requirements and power consumption of ZFS are a poor fit for my needs at home.

This "micro lab" concept is nice, but it leaves a bit to be desired. I have fully caught "Servethehome" bug, and am looking at putting a rack in my media closet. I have a pair of dedicated 15A circuits in there. I took measurements, and I have enough clearance between the wall-sockets and the bottom of the wiring cabinet to comfortably fit a 22U rack ...

Now I just have to see about passing the updated power specs to my cooling guy so I can get the closet properly cooled/ventilated for the amount of power available there.

:D
 

CreoleLakerFan

Good point. :)

Task for tomorrow, or maybe tonight after the kids settle down ... it's time to relieve the nanny for the afternoon. I don't have the bandwidth for much of anything with my four-year-old and three-month-old demanding my full attention.
 

CreoleLakerFan

In stark contrast to the X8SIA/L3426, the A1SAis are a dream. They idle at ~10W, even with what have to be super inefficient Antec EarthWatts 380s driving them.
 

namike

I got InfiniBand up and running in the lab today. I took a minor detour along the way ... I scrapped the host I was using for Nexenta and purchased a used SM X8SIA-F with an L3426. My plan was to roll my home NAS and lab NAS into it, but after taking a power reading with my new Kill-a-Watt I decided to use that board/CPU for another project. For the time being it is going to host my lab NAS, but I power everything down when I'm not training on it.

Speaking of power readings, I was fairly appalled to find that my "low power" FreeNAS/ZFS box idles at ~83W, so my current obsession is finding another solution. I see myself moving away from ZFS and onto a Windows drive-pooling platform for the power savings. Current leading candidates are Windows Storage Spaces and SnapRAID. It's all fairly static content - BRD/DVD/CD rips, photos, documents, and my software repository. The resource requirements and power consumption of ZFS are a poor fit for my needs at home.
~83W does not sound bad considering you have 8 drives running at all times. My ESXi napp-it all-in-one runs in the high 70s with 6 drives (2x Seagate 4TB NAS, 2x WD Green, and 2x Seagate 7200.10 500GB). This is with an E3-1230 v3 and the SM X10SL7-F board with 32GB of ECC. What is the config of your FreeNAS box?
 

CreoleLakerFan

~83W does not sound bad considering you have 8 drives running at all times. My ESXi napp-it all-in-one runs in the high 70s with 6 drives (2x Seagate 4TB NAS, 2x WD Green, and 2x Seagate 7200.10 500GB). This is with an E3-1230 v3 and the SM X10SL7-F board with 32GB of ECC. What is the config of your FreeNAS box?
Intel S1200KP with a Pentium G550 (65W TDP), 16GB ECC at 1.5V, an M1015/LSI 9211-8i, and 8 x ST4000DM000 in a UNAS 800, so there are a couple of 120mm fans as well. I believe the G550 in combination with the drives are the culprits. 83W isn't an absurd number, but it's unnecessary for a 24/7 home NAS. It's also not so pressing that I need to rip it apart next week, but I'm definitely going to move to Windows to see how that affects idle consumption. If that doesn't improve things I will look at an E3-1220L v2, but I'd like to see how the hardware performs under Windows before going that route.
 

CreoleLakerFan

I enabled powerd and set APM across all drives to 'minimal power without spindown,' which dropped my idle from ~83W to ~56W. There is still BIOS power management and C-state tuning to explore, but I still believe the G550 is showing its age.

I still feel I would benefit more from migrating to Windows/Storage Spaces. I put the system together a few years ago, before ReFS/Storage Spaces was established. Always-on RAIDZ2 is just not the best fit for my use case of serving media to my home - a tiered architecture would undoubtedly provide power savings.
 

rubylaser

I run SnapRAID at home for this type of data, with AUFS to pool it. I set the drives up to spin down after 30 minutes of inactivity. So far this has worked great, and I'm loving the flexibility of SnapRAID plus a pooling solution for home bulk media storage.
 

TuxDude

I run SnapRAID at home for this type of data, with AUFS to pool it. I set the drives up to spin down after 30 minutes of inactivity. So far this has worked great, and I'm loving the flexibility of SnapRAID plus a pooling solution for home bulk media storage.
+1. For power savings on a media server, switch over to SnapRAID (or another similar tool if you prefer) and let the drives spin down. When a client plays some media, a single drive spins up, then spins down shortly after the stream is complete, depending on the timeout set.
 

CreoleLakerFan

I run SnapRAID at home for this type of data, with AUFS to pool it. I set the drives up to spin down after 30 minutes of inactivity. So far this has worked great, and I'm loving the flexibility of SnapRAID plus a pooling solution for home bulk media storage.
It was one of your posts on SnapRAID that got me thinking along those lines. 56W isn't all that much in the grand scheme of things, but I'm headed toward having two or three servers powered on 24/7 ... if I can shave off 30-40W per server, that starts to add up to quite a bit of savings.
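
To put rough numbers on that savings (a back-of-the-envelope sketch; the electricity rate is an assumption in the ballpark of California residential rates at the time, not a figure from this thread):

```python
# What a given idle-power reduction is worth over a year of 24/7 operation.
KWH_RATE = 0.17          # USD per kWh -- assumed rate, adjust to your tariff
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float, rate: float = KWH_RATE) -> float:
    """Cost of running a constant load for one year."""
    return watts / 1000 * HOURS_PER_YEAR * rate

# The powerd/APM tweak above: 83W -> 56W on one box
print(f"27W saved on one server: ${annual_cost(27):.0f}/year")

# Shaving 30-40W off each of three always-on servers
for saved_per_server in (30, 40):
    total = saved_per_server * 3
    print(f"{saved_per_server}W saved on 3 servers: ${annual_cost(total):.0f}/year")

# Output (at $0.17/kWh):
# 27W saved on one server: $40/year
# 30W saved on 3 servers: $134/year
# 40W saved on 3 servers: $179/year
```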