I figured it is time to start a little build log. This is going to be both our backup DR site and a site to use during our reviews.
Summary Stats
Colocation Provider: Hurricane Electric
Size: 42U
Power: 15A (yeah, low!) of expensive Silicon Valley power
Bandwidth: 100Mbps unmetered
Basically, this is a cabinet with the bare minimum. We may take on a few boxes to help defray some of the costs. The nice thing is that HE is a big operation, so there is 24x7 access.
Interestingly enough, with all of the lower power servers we are now using, we can actually fit way more than we expected.
Todos
- Set up the other servers present
- Add more nodes
- Get 1ft cables and longer cables for the bottom of the rack
- Get STH site #2 up and running!
Space Summary
Front of rack: 5U / 42U used
Rear of rack: 7U / 42U used
Completely empty rack U's left (nothing on front or rear): 34U
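For anyone checking the math, the front and rear counts overlap wherever full-depth gear takes both rails. A quick sketch of the tally (only the three totals above come from the rack; the front/rear overlap is derived, not measured):

```python
# Illustrative rack-space tally. Only the totals come from the log;
# the split between front-only, rear-only, and full-depth gear is derived.
TOTAL_U = 42
front_used = 5          # U's with something on the front rails
rear_used = 7           # U's with something on the rear rails
completely_empty = 34   # U's with nothing on front or rear

# Inclusion-exclusion: U's occupied on front OR rear
occupied_either = TOTAL_U - completely_empty              # 8U
occupied_both = front_used + rear_used - occupied_either  # 4U of full-depth gear

print(f"Occupied on front or rear: {occupied_either}U")
print(f"Occupied on both front and rear: {occupied_both}U")
```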
4/22 - Move in
We started with a completely bare cabinet. The HE standard is round holes, but they put in square-hole rails for us. The vertical square-hole mounting rails were not even tied down, and there was a Tripp Lite Zero U PDU, but that was it. These cabinets are barebones:
After we got everything set up from a physical layout perspective, we started by mounting five servers.
1. Dual Xeon E5 V2 in a 2U
2. Dual Xeon E5 V2 in a 1U
3. Supermicro A1SRM-LN7F-2758 as the pfSense node
4. Atom C2750 in a 1U
5. Xeon E3-1125C in a 1U
And two switches: 2x HP V1910-24G
This is certainly a decent starting cluster if nothing else. Since we expect to use many lower-power servers instead of the fewer dual-socket servers that we have in Las Vegas, a different color scheme was used for the network cables: green = management, then we alternate cable colors for the gigabit ports going to each HP switch. All three cables on a server are bundled using a different color tie-down.
A few examples: the 2U is a green, blue, blue bundle with black velcro tie-downs. The 1U above it is green, red, white with red velcro tie-downs. The server above that is blue and black (no IPMI, but we will have a console server that will get a green cable) with a green velcro tie-down. For a single rack of servers, it makes life really easy knowing which cable goes where. It also makes it easy to trace which data cable goes to which switch.
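For my own notes, those assignments boil down to a small table. A sketch of it in Python (the server labels are placeholders; the colors are just the ones described above):

```python
# Cable color convention for this rack. Server labels are placeholders;
# the color assignments are the ones described in the post.
MGMT_COLOR = "green"  # IPMI/management always gets a green cable

cabling = {
    "2U dual E5 V2": {"ipmi": "green", "data": ["blue", "blue"],  "velcro": "black"},
    "1U dual E5 V2": {"ipmi": "green", "data": ["red", "white"],  "velcro": "red"},
    "1U (no IPMI)":  {"ipmi": None,    "data": ["blue", "black"], "velcro": "green"},
}

# Quick lookup: which bundle has the red velcro tie-downs?
for server, cables in cabling.items():
    if cables["velcro"] == "red":
        print(server, "->", cables["data"])
```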
And a quick front shot:
Certainly plenty of physical room to grow. Current power usage is around 3A, so there is also a lot of power headroom. Also, there is a missing Samsung 843T 480GB drive in the middle 1U server right now. That SSD is still plugged into my main workstation and I forgot to write it down on the checklist.
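For a rough idea of that headroom: at ~3A on a 15A circuit, even the conservative 80% continuous-load rule leaves plenty of room. A back-of-the-envelope sketch (the 120V figure is an assumption; I have not confirmed the circuit voltage):

```python
# Rough power headroom estimate. The 15A circuit and ~3A draw come from the log;
# the 120V circuit voltage is an assumption.
circuit_amps = 15
continuous_limit = circuit_amps * 0.8   # 80% rule -> 12A usable continuously
current_draw = 3
voltage = 120                           # assumed

headroom_amps = continuous_limit - current_draw
print(f"Drawing ~{current_draw * voltage}W of ~{continuous_limit * voltage:.0f}W usable")
print(f"Headroom: ~{headroom_amps}A (~{headroom_amps * voltage:.0f}W)")
```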
Network Configuration
I have been using pfSense at home for about six years and in the colo for over two. I got the IP address range from HE and plugged everything in, but absolutely nothing could ping the outside world. After 3 hours of thinking my pfSense skills were even more meager than previously expected, I sent a ticket confirming the IP address range. We had made a small change to the setup on 4/20 and I was desperate. It turned out to be an error on their end, and they had it fixed and responded in under 5 minutes. Interestingly, this happened to us with the Las Vegas colo the first time around too.
The switches are likely to see a bit of swapping out. There is a Mikrotik CRS226 (24-port GbE, 2-port 10Gb SFP+) switch on the bottom of the rack, and a Dell 8132 24-port 10Gbase-T switch is also arriving today.
4/23 - Remote Configuration
Today all of the work is going to be done remotely. The datacenter is only about 20-30 minutes from my normal commute spots, but after 6 hours yesterday, I can stand to hear fewer fans!
Networking
Fixed an error in the VPN configuration: I had the wrong subnet selected in one of the VPN settings, which was causing the issue.
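For reference, the fix amounted to making sure the tunnel's local/remote network definitions actually match the LANs on each side. A minimal sanity-check sketch, with made-up placeholder subnets rather than the real ranges:

```python
# Hypothetical sanity check for site-to-site VPN subnet settings.
# The addresses below are placeholders, not the real ranges at either site.
import ipaddress

local_lan = ipaddress.ip_network("10.0.1.0/24")      # what this site actually uses
tunnel_local = ipaddress.ip_network("10.0.1.0/24")   # "local network" set in the VPN
tunnel_remote = ipaddress.ip_network("10.0.2.0/24")  # "remote network" set in the VPN

# The tunnel's local side should match the real LAN...
assert tunnel_local == local_lan, "VPN local subnet does not match the LAN"
# ...and the two ends must not overlap, or routing gets ambiguous.
assert not tunnel_local.overlaps(tunnel_remote), "local and remote subnets overlap"
print("VPN subnet definitions look sane")
```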
Setting up the cluster
Decided to install Proxmox VE 3.4 on the Intel Atom C2750 1U machine. Both SanDisk CloudSpeed Eco 960GB drives were configured as a ZFS RAID 1 mirror, so there is now about 900GB of usable storage. I am going to use this node for administration for the time being and to help set up the remainder of the nodes.
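The ~900GB figure is about what a mirror of two 960GB (decimal/marketing gigabyte) drives should report once the tools convert to GiB. A quick check of the arithmetic:

```python
# Why two 960GB drives in a ZFS RAID 1 mirror show roughly 900GB usable.
# A mirror's capacity is that of a single drive; the rest is unit conversion.
drive_gb = 960                      # marketed capacity, decimal gigabytes
bytes_per_drive = drive_gb * 10**9

mirror_bytes = bytes_per_drive      # RAID 1 / mirror: capacity of one drive
usable_gib = mirror_bytes / 2**30   # what most tools report (GiB)
print(f"Mirror capacity: ~{usable_gib:.0f} GiB")   # ~894 GiB, i.e. "about 900GB"
```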
4/26 - New Shelf and Cluster
Had a few milestones here. From a physical perspective, we added a mid-rack shelf for holding a laptop, an Intel NUC and any smaller shelf mounted devices.
New hardware
Added an Intel NUC for a little in-rack management box
Added a dual Intel Xeon E5-2699 V3 system with 128GB of RAM, 2x Intel S3700 400GB drives, 2x Intel S3500 800GB drives and a Fusion-io ioDrive 160GB SLC card.
One of the dual Xeon E5 V2 systems has been misbehaving; it is now marked for replacement
New dual Xeon E5-2698 V3 system with 128GB of RAM is in testing, getting ready for deployment. This is a 24-bay 2.5" chassis that already has an onboard SAS 3108 controller and 5x 15K.5 300GB Seagate SAS 12Gbps drives installed. Next will likely be SAS SSDs to accompany the system.
Networking
Site-to-site VPN is now functional and it is very nice! It has made administration significantly easier.
The Dell PowerConnect 8132 is a great switch, but I am missing the inner rails and they are almost impossible to find.
Setting up the cluster
I hit an error with Proxmox. The easy answer is that I need to re-install on the two nodes that do not have IPMI. This is non-trivial, but I will do it the next time I am in Fremont.
4/29 - More Setup
Networking
Site-to-site VPN is now connected between Las Vegas and Fremont! From home I have site-to-site to each DC, and the two DCs now have site-to-site between them. Excellent!
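With three locations in the mix (home, Las Vegas, Fremont), a full mesh only needs three tunnels. A tiny sketch of the topology, site names only with no real addressing:

```python
# Full-mesh site-to-site topology across the three locations mentioned.
# Site names only; no real subnets or endpoints are shown here.
from itertools import combinations

sites = ["home", "Las Vegas", "Fremont"]

tunnels = list(combinations(sites, 2))   # every pair of sites gets one tunnel
for a, b in tunnels:
    print(f"site-to-site: {a} <-> {b}")
print(f"{len(tunnels)} tunnels for a full mesh of {len(sites)} sites")
```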