Network: TK's home network / server cabinet(s)


Terry Kennedy

Well-Known Member
Jun 25, 2015
New York City
www.glaver.org
I've had a server cabinet at my house since 1999. It is in a constant state of change, and I occasionally document it on my web site. I've posted a few pictures here previously, usually to illustrate some point or to answer a question. A few months ago I realized I never posted the whole build, so I asked Patrick and he said I was welcome to post it here. So, without further ado:



The networking gear is in the top half of the left-hand rack:



From the top down are:
  • Fiber patch panel. 12 strands of fiber to one of my former off-site backup locations terminate in this panel. While that fiber is no longer in use, the patch panel provides a nice place to neatly coil up excess fiber.
  • Cisco 3845 router. Its primary purpose is to provide out-of-band serial connectivity (hence the "OOB1" label) to other equipment in this rack via a NM-16A card. A secondary function is 14 VoIP termination ports using an EVM-HD-8FXS/DID card with an EM-HDA-6FXO expansion module. Right now I am only using 4 of the 14 ports. Also included are an NME-AIR-WLC8-K9 wireless LAN controller card and an NME-NAM-120S network analysis module. Rounding things out are 4 T1 ports (no longer in use) on a pair of VWIC-2MFT-T1-DI cards, a VPN module and a compression module. Before I upgraded to dual ASR1001 routers, this unit was my primary gateway.
  • Dell PowerConnect 8024F and 8024 10 Gigabit Ethernet switches. These are stacked via 4 DAC cables, which you can see on the right-hand side of the switches. This switch stack is trunked to the Catalyst 4948-10GE below via a pair of multimode fiber links for a total capacity of 20Gbit/sec. VLANs are used to keep traffic separated into groups. This leaves 38 ports for connecting various pieces of equipment. Most ports are available for future use - at present, the only 10 Gigabit links to end systems are the 4 RAIDzillas and the Dell PowerEdge R710 (both described below). At some point I will extend the 10 Gigabit network to other locations in the house.
  • PowerDsine PD-9024G/ACDC/M/F PoE (Power over Ethernet) injector. This is a device which is inserted between a network switch and client devices to power them over the Ethernet (rather than using "wall wart" power bricks). While this is a 24-port unit, I have only cabled 8 ports to the Catalyst 4948-10GE below. The first 4 ports provide power to four Cisco Aironet 702i access points, while the 8th port powers the clock in the rack to the right. This is the managed version of the unit, so it also has its own Ethernet connection to the Catalyst 4948-10GE (you can see this cable on the right). Using the management interface I can remotely control power to each connected device as well as perform other management tasks. This is a Gigabit Ethernet unit and can provide 36W of power to each of the 24 connected devices.
  • Cisco Catalyst 4948-10GE switch. This is my core network switch. It is connected to the Dell PowerConnect switches above via a pair of 10 Gigabit Ethernet multimode fiber links. It also connects to 3 other Catalyst 4948 switches throughout the property, via Gigabit Ethernet (the 10 Gigabit Ethernet is only used within these racks). The cables are color-coded - black is for network cables within the racks, blue is for network cables that go outside of the racks, green is for the serial console and yellow is for special purposes (this one connects to the Network Analysis Module in the Cisco 3845 router above).
  • A pair of Cisco ASR1001 routers. These connect my equipment to the outside world via Gigabit Ethernet and a 100Mbit/sec backup Ethernet. They are both trunked to the Catalyst 4948-10GE via a pair of Gigabit Ethernet cables. Those cables carry a half dozen or so VLANs. The routers use HSRP for redundancy, so either can fail (or be reloaded) without affecting connectivity.
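The trunk-plus-HSRP arrangement above can be sketched in IOS-style configuration. Every interface name, VLAN ID, and address below is invented purely for illustration - none of it is taken from the actual configs:
Code:
! Catalyst side: dot1q trunk carrying the shared VLANs to one ASR1001
interface GigabitEthernet1/1
 description Trunk to ASR1001 #1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30

! ASR1001 side: one subinterface per VLAN, with an HSRP virtual gateway
interface GigabitEthernet0/0/1.10
 encapsulation dot1Q 10
 ip address 192.0.2.2 255.255.255.0
 standby 10 ip 192.0.2.1
 standby 10 priority 110
 standby 10 preempt
The second router carries the same subinterfaces with a lower standby priority, so hosts always point at the virtual address (192.0.2.1 here) and either router can fail or be reloaded without anyone noticing.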
The cabling is supported and organized with both Velcro ties and vertical cable management rings, visible on the right-hand side of this picture. Each cable has a "self-laminating" printed label (printed with a Brady TLS2200 label printer) identifying where it goes and what it is used for. Each piece of equipment is also labeled with its name (printed on a Brother P-Touch PC label printer).

Most people can make the front of a rack look good without too much effort. Having the back also look good is a bit more of a challenge:



All of the cables (except for fiber jumpers) are custom-made by me to the exact length needed. I can get fiber jumpers so inexpensively (I pay less for an assembled and tested duplex cable than I would for just one of the four connectors I'd need to make the jumper myself) that I haven't custom-made any fiber jumpers in probably 10 years or so. I still do termination of bulk fiber for work - you simply can't get a 100,000-foot piece of 192-strand fiber with the ends pre-connectorized.

The rest of the equipment in the racks is documented in lots more detail on my web site - this post just goes into the networking parts.

Over the years the equipment has changed. The switches have been (in order, as I remember them):
  • Allied Telesyn (or Allied Telesis, depending on when you looked - 10BASE-T hub, not a switch)
  • N Base NH208 (an unpleasant switch from an unpleasant company that wasn't interested in patching responsibly-reported security vulnerabilities)
  • Cisco 2900M (couldn't walk and chew gum at the same time)
  • Cisco 2900XL (same problems as the M, but add-in cards were faster)
  • Cisco Catalyst 5505/RSM/NFFC (worked fine, but was a power pig, had an integrated RSP2-class router)
  • Netgear GSM7248 (an attempt to get a Catalyst 4948-class switch, but on a budget)
  • Cisco Catalyst 4948 (gave up on the Netgear, bought the switch I should have purchased in the first place)
  • Cisco Catalyst 4948-10GE plus Dell PowerConnect 8024
  • Cisco Catalyst 4948-10GE plus Dell PowerConnect 8024 and 8024F (current)
On the router side, there has been a similar progression:
  • Cisco 2501
  • Cisco 2600 family - 2610, 2611, 2651, 2651XM
  • Cisco Catalyst 5500 RSM (integrated router in switch)
  • Cisco 2800 family (2811, 2821)
  • Cisco 3800 family (3825, 3845 - the 3845 is still in use for some ancillary functions)
  • Ubiquiti Edgerouter 8 Pro (played with, never put into production)
  • Ubiquiti Edgerouter Infinity (likewise, beta unit)
  • Cisco ASR1001 (quite a big upgrade!)
  • Cisco ASR1001 * 2 (for redundancy)
All of the router and switch configurations have been tracked using RANCID, and historical data back to September 2003 is available in the current on-line system. For data older than that, I'd need to go to archived data. In addition, I monitor performance and environmental data with MRTG. Monthly data going back 5+ years is online, while older data has been archived. I also have a locally-developed tool that collects all sorts of data via IPMI and presents it graphically. You can see some of it on my RAIDzilla 2.5 page.
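For anyone unfamiliar with the tools above, both are driven by small plain-text configs. The fragments below are generic illustrations only - hostnames and SNMP communities are made up, and note that the router.db field separator is ';' in RANCID 3.x but ':' in older 2.x releases:
Code:
# RANCID router.db - one device per line: name;type;state
core-sw1;cisco;up
oob1;cisco;up

# MRTG target - poll one switch port (ifIndex 1) via SNMP
Target[core-sw1.1]: 1:public@core-sw1:
MaxBytes[core-sw1.1]: 125000000
Title[core-sw1.1]: core-sw1 port 1 traffic
PageTop[core-sw1.1]: <h1>core-sw1 port 1 traffic</h1>
RANCID then diffs each collected config into version control, which is how a config history like the September-2003-to-present one gets kept automatically.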

Here are some pictures of the cabinet(s) from various earlier times:

January, 2000:



February, 2003:



September, 2005:



The door was permanently removed from the server cabinet (for cooling reasons) in 2001. The single cabinet was replaced by the current dual-cabinet setup in September 2013, after quite a few months of planning. Here is an image saved from Visio which shows the plan at the time - some equipment and locations changed before the switch-over and some of it after:

There was also a detailed planning document, called "The Big Plan 2" (the original Big Plan was for a major FreeBSD software + hardware update). Here is a slightly-redacted version:
Code:
Any time:

X Blow dust out of both air conditioners from outside
X Clean filters on both air conditioners
X Make sure all FreeBSD systems have latest ports, kernel+world
X Create new port assignments for new-switch1 (4948-10GE) and new-switch2 (8024)
X Put that config on new-switch1

Sun 9/6:

X Remove log devices (SSDs) from rz1m/2/3
X Make sure RAID battery on gate charged properly; if not grab the one from the
  prior gate and make sure it gets installed before gate gets racked
X Dismount rz1/1m/2/3 from gate NFS and comment out those fstab entries
X Shut down rz1/1m/2/3
X Shut down ns0/2
X Shut down old-www
X Shut down sv
X Shut down TL4000
X Remove power, network, console, etc. cables from shut down systems
X Move rz1m/3, ns0/2, old-www, sv, tl4000 out of room
X Remove batteries from Symmetra XR packs
X Detach cables from XR packs
X Move XR packs to near bookshelf stand, (temporarily) re-install batteries
X Disconnect and safely store wap4, clock1
X Bring in proper socket from garage tools for attaching side rails

Mon 9/7:

X De-rack remaining systems and stack on floor of dining room
X Remove remaining network, power, console, etc. cables
X Remove batteries, power modules from Symmetra and set aside
X De-rack Symmetra and (temporarily) re-install batteries, power modules
X Move old cabinet to living room (temporarily)
X Deinstall old air conditioner from behind rack and place on back stairwell landing
X Install new air conditioner in existing sleeve - measure total depth for future
  removal w/o needing to move new racks
X Slide wooden base frame away from window until even w/ door frame
X Vacuum the whole room (or as much of it as can be reached), including ceiling
X Bring new cabinets from entryway and place on wood frame
X Install feet on cabinets, ensure level and room for wires under racks
X Bolt cabinets together
X Do final position adjustments of cabinets (make sure we have A/C clearance)
X Position front & rear rails, install 3rd set of rails (rear-facing)
X Bring old black & blue cabinets to entryway
X Install new SSDs in rz1/1m/2/3, install 10GE card in rz2

Tue 9/8:

X Install equipment in racks, starting with Symmetra in left and XR's in right
X Verify cabinet position, rail positions, cabinet level, A/C clearance
X Continue installing equipment in cabinets per drawing
X Party time!

Aftermath:

X Cable all of this stuff up again 8-(
X Move old cabinet (plus others) to entryway and call for pickup
X Vent fan in rear window (parts from basement / last year's failed project)
X 10GbE cabling
X Console (VGA / KVM) cabling
X Position temperature probes
X Krone blocks and frame, clean up wiring, mount in 2nd cabinet
X Try to neaten Ethernet / Cable Modem / Phone / whatever wires that enter racks
  so all nice like blue solid CAT 5.
X Put sides on
X Need powerd configured on rz3, rz4
X Connect audio extension cable from SV to speakers on bookshelf
X Run a couple of permanent Ethernet cables for test systems (wall jacks?)
X Install USB extensions for keyboard and mouse, so lap holdable
X Pictures!

Notes:

o RPB current draw with all systems powered (RPB1/2/3/4): 8/7/5/5
o UPS runtime with all systems powered: ~ 90 Minutes
Edited 9-Apr-18 to fix typo and use correct Visio design image, as well as adding information about RANCID, MRTG, and IPMI monitoring.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
This is a proper home lab, although it should come with an NSFS (not safe for spouse) warning.
 

nthu9280

Well-Known Member
Feb 3, 2016
San Antonio, TX
Awesome setup. One question though. Why did you decide to use the wider rack and fixed shelving instead of the rails and regular 19" width racks?
 

Terry Kennedy

Awesome setup. One question though. Why did you decide to use the wider rack and fixed shelving instead of the rails and regular 19" width racks?
Using the 23" cabinets (telco spec) gives me a lot of space in the left and right "gutters" for both cabling and reach-in access. I've used 23" racks on all projects where I've been able to supply the racks (some places insist that you use their cabinets, but even then 23" ones are usually available on request).

I'm not sure what you mean by fixed shelving - other than the 2 shelves in the right hand cabinet for non-rackmount stuff, everything is mounted with rack ears. Or do you mean the gold-colored bars? It is very hard to get a set of rails in the back adjusted for everything that wants to be supported on both the front and the back, and I don't like supporting heavy network equipment via the front ears only, even if that is what the manufacturer expects. The bars provide support at the rear of each device (there's a 3rd set of rails in the middle for shorter equipment). I also don't like the idea of bouncing a bunch of disk drives around by sliding a system out of a running rack, so the lack of slides on the RAIDzillas and some other equipment isn't a problem. The Dell R710 and TL4000 have what Dell calls "Static" rails, and the APC UPS stuff also doesn't have slide rails from the factory.

There have traditionally been 2 rack sizes - "Telco", which is 23", and regular, which is 19". The original Western Electric 23" racks had holes evenly spaced at 1" (you can actually see a reducer for this that got into the wrong box and was used on the right side of the ASR1001s in my first picture). Subsequent racks (19" and 23") use EIA 1.75" unit spacing, either with or without the optional center hole. Some of my reducers are 3-hole-per-RU, others are 2-hole. There are some subtle issues with the 2-hole Newton reducers (4032xx30), particularly with semi-compliant equipment like some Cisco gear - there isn't enough clearance between the 2 reducers to mount some equipment. I had Newton design the 3-hole (20625xx30) reducers with a little more clearance. Before that, we had to hacksaw parts of the reducer flanges off - not fun when you're doing it on the reducers for a 20-RU Cisco 7513 router (the Newton parts are hardened steel, not aluminum).

There are 2 other rack sizes you might encounter. Chatsworth (CPI) made some server stands with integrated KVM switches that were 25" wide, like the ones in this picture from 1997:



Facebook adopted a 21" rack for their Open Compute project for reasons unknown to me. I suspect that it is because you can (with a little squeezing) build a 21" wide rack that is the same 24" width on the outside as a standard computer room floor tile. Or it might be because a European standard is 535mm, which works out pretty close to 21" (it is off by under 1/10 of an inch).
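The 535mm-versus-21" comparison is easy to verify with a quick conversion (25.4mm per inch):

```python
MM_PER_INCH = 25.4

width_in = 535 / MM_PER_INCH  # 535 mm expressed in inches
offset = width_in - 21        # how far it misses a true 21"

print(round(width_in, 3))  # 21.063
print(round(offset, 3))    # 0.063 - indeed under 1/10 of an inch
```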
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I'm a bit of a clock nut.
I'm disappointed not to see either a radio receiver or a GPS+PPS so you can have your own stratum 0 at home :)

That said, from the looks of it you're only a few small steps away from getting a hold of some stolen plutonium and a DeLorean...
 

Terry Kennedy

I'm disappointed not to see either a radio receiver or a GPS+PPS so you can have your own stratum 0 at home :)
You mean like this, from my basement?





Single cesium tube primary, dual rubidium secondary, GPS tertiary. It doesn't work any more (combination of having been in storage + aging on the sources). And it only did time as a minor function - most of the cards in it generated timing pulses for T-carrier telecom circuits, since that's what it came out of (I got it for a steal because the telco was decommissioning it and lost the special HAZMAT box needed to ship it). I posted these pictures of it running at my house and got an email from the Project GREAT guy (carried 3 atomic clocks to the top of a mountain). He's also a Nixie tube fan (that's how he saw my pictures).
 

EffrafaxOfWug

OK, I stand very much corrected :D That's the kind of awesome kit that my company recently declined to pay for (although strictly speaking the multiple GPS+PPS sources we've got are fine for the millisecond accuracy stipulated by law) - really, it was getting the VMs to live within this limit that was the tricky part of the project. But anyway, that was what finally got me to make a proper stratum 0 source for the house.

Shame that in 2015 caesium decay sources and rubidium oscillators aren't available in every corner drug store.

...although it doesn't really have any funky hardware to display it on so I'm mulling getting a nice Nixie clock (for readers in the EU, I bought my dad [an ex-telecoms engineer] one from PV Electronics plus the radio module and he loves it to bits, although avoid the LED backlighting if you can). I'm especially fond of your decatron pseudo-clock although those seem much harder to come by than Nixies and VFDs.
 

fractal

Active Member
Jun 7, 2016
Going completely off topic, you got NTP to run nice in a VM? Pray tell, HOW? The pool is full of folk who think they can, but can't. I don't even try. My stratum 0 is a low cost, low power box with a serial GPS with PPS.

But, back to topic. Nice racks. Much, MUCH cleaner than mine.
 

EffrafaxOfWug

Going completely off topic, you got NTP to run nice in a VM? Pray tell, HOW? The pool is full of folk who think they can, but can't. I don't even try. My stratum 0 is a low cost, low power box with a serial GPS with PPS.
I should have clarified, the NTP servers themselves aren't running on VMs; our stratum 0's are a mix of GPS and radio clocks and we have some physical pizza boxes that present NTP to the wider network.

Time accuracy on the VMs feeding from these is good enough for our purposes; the MiFID II regs for our trading platforms specify an accuracy of no worse than ±1ms divergence from UTC which, after a bit of work (much of which I'm unable to relay currently), was easily achievable on our VMware platforms. Servers generally tick along with a ±0.3ms variance during business hours, with an array of scripts for reporting, self-correction and alerting.

Eventually I gave up trying to find an affordable GPS+PPS at home and bit the bullet with a GPS+PPS PiHat.
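For the curious, the HAT route needs very little configuration. A rough sketch of an ntp.conf using ntpd's standard reference-clock drivers (28 = shared memory fed by gpsd, 22 = kernel PPS) - the time1 offset below is just a placeholder you'd calibrate for your own receiver:
Code:
# Coarse time-of-day from gpsd via shared memory (names the second)
server 127.127.28.0 minpoll 4 maxpoll 4
fudge  127.127.28.0 refid GPS time1 0.130

# Kernel PPS discipline (marks the edge of the second)
server 127.127.22.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.22.0 refid PPS
With a clear sky view, a setup like this typically disciplines the Pi to within a few microseconds of the PPS edge.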
 

marcoi

Well-Known Member
Apr 6, 2013
Gotha Florida
I should have clarified, the NTP servers themselves aren't running on VMs; our stratum 0's are a mix of GPS and radio clocks and we have some physical pizza boxes that present NTP to the wider network.

Time accuracy on the VMs feeding from these is good enough for our purposes; the MiFID II regs for our trading platforms specify an accuracy of no worse than ±1ms divergence from UTC which, after a bit of work (much of which I'm unable to relay currently), was easily achievable on our VMware platforms. Servers generally tick along with a ±0.3ms variance during business hours, with an array of scripts for reporting, self-correction and alerting.

Eventually I gave up trying to find an affordable GPS+PPS at home and bit the bullet with a GPS+PPS PiHat.
I want to know what a GPS+PPS PiHat is.
 

zeyoner

New Member
Mar 17, 2018
I think it was mentioned - trading platforms, i.e. probably buying and selling stocks, where milliseconds can make or break trading profits.
Since 1999 from home? It just doesn't seem cost effective. Then again, my knowledge of trading platforms is limited - I really don't know enough to understand the need/want for it in a home environment. It would make more sense to me if it was some sort of backup network at home for a business owner. The electric bill just seems wasteful, in my opinion. Not that anyone is asking, and I don't mean to rain on anyone's parade - just trying to understand.