STH Colocation Number 2 Build Log


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I figured it is time to start a little build log. This is going to be both our backup DR site as well as a site to use during our reviews.

Summary Stats

Colocation Provider: Hurricane Electric
Size: 42U
Power: 15A (yeah, low!) of expensive Silicon Valley power
Bandwidth: 100Mbps unmetered

Basically, this is a cabinet with the bare minimum. We may take on a few boxes to help defray some costs. The nice thing is that HE is a big operation, so they have 24x7 access.

Interestingly enough, with all of the lower power servers we are now using, we can actually fit way more than we expected.

Todos
  • Set up the other servers already present
  • Add more nodes
  • Get 1ft cables and longer cables for the bottom of the rack
  • Get STH site #2 up and running!
Total idle is under 4A right now, so less than 1/3 of what we have usable. Lots of room to grow.
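For a rough sense of the headroom, here is some napkin math. It assumes a 120V feed and the usual 80% continuous-load derating; I have not confirmed either detail for this circuit, so treat the numbers as estimates:

```python
# Rough power headroom estimate for the cabinet.
# Assumptions: 120V feed and the common 80% continuous-load rule
# (neither confirmed for this circuit - this is just napkin math).
circuit_amps = 15.0    # provisioned circuit
idle_amps = 4.0        # measured idle draw (just under 4A)
volts = 120.0          # assumed feed voltage

usable_amps = circuit_amps * 0.8           # 80% rule -> 12A continuous
headroom_amps = usable_amps - idle_amps    # room left to grow into

print(f"Usable:   {usable_amps:.1f}A ({usable_amps * volts:.0f}W)")
print(f"Idle:     {idle_amps:.1f}A ({idle_amps * volts:.0f}W)")
print(f"Headroom: {headroom_amps:.1f}A ({headroom_amps * volts:.0f}W)")
```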

Space Summary
Front of rack: 5U / 42U used
Rear of rack: 7U / 42U used
Completely empty rack U's (nothing on front or rear): 34U

4/22 - Move in
We started with a completely bare cabinet. The HE standard is round holes, but they put in square holes for us. The vertical square-hole mounting rails were not even tied down; there was a Tripp Lite Zero U PDU, but that is it. These cabinets are barebones:

STH Fremont colo move in - empty cabinet start.jpg

After we got everything set up from a physical layout perspective, we started by mounting five servers.
1. Xeon E5 V2 dual in a 2U
2. Xeon E5 V2 dual in a 1U
3. Supermicro A1SRM-LN7F-2758 as the pfSense node
4. Atom C2750 in a 1U
5. Xeon E3-1125C in a 1U

And two switches: 2x HP V1910-24G.

STH Fremont colo move in - racked rear.jpg

This is certainly a decent starting cluster if nothing else. Since we expect to use many lower-power servers instead of the fewer dual-socket servers that we have in Las Vegas, a different color scheme was used for the network cables. Green = management, and then we alternate cable colors for the gigabit ports going to each HP switch. All three cables per server are bundled using a different color tie down.
STH Fremont colo move in - racked cable rear.jpg

A few examples: the 2U is a green, blue, blue bundle with black velcro tie downs. The 1U above it is green, red, white with red velcro tie downs. The server above that is blue and black (no IPMI, but we will have a console server that will get a green cable) with a green velcro tie down. For a single rack of servers, it makes life really easy knowing which cable goes where. It also makes it easy to trace which data cable is going to which switch.
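If anyone wants to copy the idea, here is a rough sketch of the convention as a simple lookup table. The server labels and exact color assignments below are illustrative placeholders rather than a full inventory:

```python
# Sketch of the cable/velcro color convention described above.
# Server labels and colors are illustrative placeholders, not a full inventory.
cable_map = {
    "2U dual E5 V2": {"ipmi": "green", "data": ["blue", "blue"],  "velcro": "black"},
    "1U dual E5 V2": {"ipmi": "green", "data": ["red", "white"],  "velcro": "red"},
    "1U (no IPMI)":  {"ipmi": None,    "data": ["blue", "black"], "velcro": "green"},
}

for server, c in cable_map.items():
    ipmi = c["ipmi"] or "none (console server gets a green cable later)"
    print(f"{server}: IPMI {ipmi}, data {' + '.join(c['data'])}, velcro {c['velcro']}")
```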

And a quick front shot:
STH Fremont colo move in - racked cable front.jpg

Certainly plenty of physical room to grow. Current power usage is around 3A, so there is also a lot of power headroom. Also, there is a Samsung 843T 480GB drive missing from the middle 1U server right now. That SSD is still plugged into my main workstation; I forgot to put it on the checklist.

Network Configuration
I have been using pfSense at home for about six years and in the colo for over two. I got the IP address range from HE and plugged everything in, but got absolutely nothing when trying to ping the outside world. After 3 hours of thinking my pfSense skills were even more meager than previously expected, I sent a ticket asking them to confirm the IP address range. We had made a small change to things on 4/20, and I was getting desperate. It was an error on their end, and they fixed it and responded in under 5 minutes. Interestingly, this happened to us with the Las Vegas colo the first time around too.
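In hindsight, a scripted sanity check that separates "my pfSense config is wrong" from "the upstream allocation is wrong" would have saved a couple of those hours. A rough sketch of the idea; the gateway address is a documentation placeholder, not our actual assignment:

```python
# Quick reachability check: can we reach the upstream gateway, and can we
# reach the outside world? The gateway address is a placeholder.
import subprocess

targets = {
    "upstream gateway": "203.0.113.1",  # placeholder for the provider-assigned gateway
    "outside world":    "8.8.8.8",
}

for name, ip in targets.items():
    result = subprocess.run(["ping", "-c", "3", "-W", "2", ip],
                            capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{name} ({ip}): {status}")
```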

The switches are likely to see a bit of swapping out. There is a Mikrotik CRS226 (24-port GbE plus 2-port SFP+ 10GbE) switch at the bottom of the rack, and a Dell 8132 24-port 10Gbase-T switch is also arriving today.

4/23 - Remote Configuration

Today all of the work is going to be done remotely. The datacenter is only about 20-30 minutes from my normal commute spots, but after 6 hours yesterday, I can stand to hear fewer fans!

Networking
Fixed an error in the VPN configuration. I had the wrong subnet selected in one of the VPN settings which was causing an issue.
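A tiny sanity check like this would have caught it faster; the subnets below are made up for illustration:

```python
# Check that the subnet entered in the VPN settings actually covers the
# remote LAN it is supposed to reach. Both subnets here are made up.
import ipaddress

remote_lan  = ipaddress.ip_network("10.20.0.0/24")  # what the far side really uses
vpn_setting = ipaddress.ip_network("10.2.0.0/24")   # what was typed into the tunnel config

if remote_lan.subnet_of(vpn_setting):
    print("VPN subnet covers the remote LAN")
else:
    print(f"Mismatch: tunnel is set for {vpn_setting}, but the remote LAN is {remote_lan}")
```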

Setting up the cluster
Decided to install Proxmox VE 3.4 on the Intel Atom C2750 1U machine. Both SanDisk CloudSpeed Eco 960GB drives were configured in ZFS RAID 1 (mirror) mode, so there is now about 900GB of usable storage. I am going to use this for administration for the time being and to help set up the remainder of the nodes.
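For reference, here is a rough sketch of the post-install checks to confirm the mirror came up as expected. The pool name "rpool" is assumed to be the installer default and may differ:

```python
# Post-install checks for the ZFS mirror on the Proxmox node.
# "rpool" is assumed to be the installer-created pool name; adjust if different.
import subprocess

POOL = "rpool"

# Pool health: should report state ONLINE with both SSDs in a mirror vdev.
subprocess.run(["zpool", "status", POOL], check=True)

# Capacity: a 2-way mirror of 960GB drives should show roughly ~900GB usable.
subprocess.run(["zpool", "list", "-o", "name,size,alloc,free,health", POOL], check=True)
```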

4/26 - New Shelf and Cluster

Had a few milestones here. From a physical perspective, we added a mid-rack shelf for holding a laptop, an Intel NUC and any smaller shelf mounted devices.

New hardware
  • Added an Intel NUC as a little in-rack management box
  • Added a dual Intel Xeon E5-2699 V3 system with 128GB of RAM, 2x Intel S3700 400GB drives, 2x Intel S3500 800GB drives, and a Fusion-io ioDrive 160GB SLC card
  • One of the dual Xeon E5 V2 systems has been misbehaving; it is now marked for replacement
  • New dual Xeon E5-2698 V3 system with 128GB of RAM, in testing and getting ready for deployment. This is a 24-bay 2.5" chassis that already has an onboard SAS 3108 (12Gbps) controller and 5x Seagate 15K.5 300GB SAS drives installed. SAS SSDs will likely be next to accompany the system.
STH Fremont - 2015-04-27 -small.jpg
Networking
Site-to-site VPN is now functional and is very nice! It has made administration significantly easier.

The Dell PowerConnect 8132 is a great switch but I am missing inner rails and they are almost impossible to find.

Setting up the cluster
I hit an error with Proxmox. The easy answer is that I need to re-install on two nodes that do not have IPMI on them. This is non-trivial, but I will do it the next time I am in Fremont.
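In the meantime, the cluster state is easy enough to keep an eye on remotely. A rough sketch of the checks I run (standard pvecm calls, run as root on a cluster member):

```python
# Remote check of Proxmox cluster membership and quorum.
# Standard pvecm calls; run as root on a cluster member.
import subprocess

# Overall cluster state, including quorum information.
subprocess.run(["pvecm", "status"], check=True)

# Which nodes the cluster currently knows about.
subprocess.run(["pvecm", "nodes"], check=True)
```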

4/29 - More Setup

Networking
Site-to-site VPN is now connected between Vegas and Fremont! From home I have site-to-site to each DC, and the two DCs have site-to-site between them as well. Excellent!
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Very cool. Thanks for the write-up. And thanks for future proofing the site, I hope to come to this site/forums for years to come.
Thank you for the kind words.

I just updated the post with a few pictures and a lot of updated information.
 

Biren78

Active Member
Jan 16, 2013
550
94
28
That's a crazy amount of gear for that low an idle. I guess Haswell-EP can idle under 70W, and those C275x machines are 30W if you're using SSDs and good PSUs. Add the switches and the E3 and you're at 3A. Good deal.

I never thought of it, but you can probably fit 3x Atom C27xx with 32GB and 2 SSDs in 1A, so your 12A can handle 36 low-power machines. It'd make an awesome cluster, dude.
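Roughly, the napkin math works out like this (rough estimates above, not measurements):

```python
# Back-of-the-envelope node count from the estimates above (not measurements).
nodes_per_amp = 3     # ~3 Atom C27xx nodes (32GB RAM, 2 SSDs) per 1A
usable_amps = 12      # the 15A circuit derated to ~12A continuous
print(nodes_per_amp * usable_amps)  # -> 36 low-power machines
```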
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Cool.

Is that in the HE facility on Warm Springs or the one on Mission Ct.?

I'm also interested in the monthly on that facility, though I'm sure HE probably doesn't want you publishing their pricing or whatever deal they happened to strike for you.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Is that in the HE facility on Warm Springs or the one on Mission Ct.?
Warm springs. This is basically their $700/mo deal with a few dollars off the first three months as a move-in incentive... but they make you pay for square holes. Oh well.
 

Hank C

Active Member
Jun 16, 2014
644
66
28
Not bad for the price... but I think the quote I got from the LV colo is about the same value as yours.
 

Guillermo Calvo

New Member
Jul 8, 2014
11
7
3
57
Chicago
www.datacenter1.com
I would suggest putting your network gear in the middle of the rack; it is easier to work on / inspect that way, and it saves some money on network cables.
Another recommendation is to put servers of the same depth together (deepest servers at the bottom of the rack); it will make your life easier when you have to swap servers.

Good luck with your rack!
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Yeah, I know. The worst part was when I got looked at funny for saying that was important. Quick rails = greatest thing ever.

@Hank C Las Vegas is much cheaper. But I needed something in Silicon Valley so I could make more frequent trips. Mountain View was another $200/mo!

Thank you @Guillermo Calvo. I was originally thinking of doing the mid-rack network thing; I will most likely be replacing the switches soon, so that may happen yet. I am actually trying to keep the bottom part of the rack relatively free for product reviews.
 

awedio

Active Member
Feb 24, 2012
776
225
43
Warm springs. This is basically their $700/mo deal with a few dollars off the first three months as a move-in incentive... but they make you pay for square holes. Oh well.
In downtown LA (Wilshire corridor), that size rack is $800/mo with 40A of power
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I've been looking at HE or LV (which DC did you use?) or JOES for a cheap place to stick some boxes... Also checked out some others that quoted me $1400+ for a 1/2 rack 100mbit :X

How are you liking HE? Their prices seemed fair? Not much of a drive for me either.

We're in the midst of possibly upgrading our home from a single T1 to a 20mbit dedicated fiber connection through ATT, but not sure they pulled fiber far enough... ugh.
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
In downtown LA (Wilshire corridor), that size rack is $800/mo with 40A of power
Just for comparison, in my datacenter at work (on-site and fully owned by work, not colo or anything) we run a pair of 3-phase 30A circuits to every 42U rack, so that's about 180A of total power run to each rack. Following the standard rule of datacenter power, though, you only ever put in enough equipment to draw 80% of 50% of the total, so we have to keep the load under 72A per rack. The 80% of 50% thing is a combination of redundancy and keeping a safety margin so breakers won't blow: if a transformer upstream fails, we could lose one of the feeds to the rack, and when the entire load fails over to a single feed we want to be at max 80% utilization of what is left.
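Spelled out, the numbers work like this:

```python
# The 80%-of-50% rule from the post above, spelled out.
circuits = 2              # pair of feeds per rack
phases = 3                # 3-phase circuits
amps_per_phase = 30       # 30A per phase

total_amps = circuits * phases * amps_per_phase   # 180A run to the rack

surviving_share = 0.5     # plan for losing one of the two feeds entirely
breaker_margin = 0.8      # stay at 80% of what remains

print(total_amps * surviving_share * breaker_margin)  # -> 72.0A max load per rack
```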
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
In downtown LA (Wilshire corridor), that size rack is $800/mo with 40A of power
Too bad I do not live in downtown LA anymore! I was just next to 7th and Fig for 3+ years. If I go twice a month, it is cheaper to have it local, sadly, although I will miss the United miles.

On the Hyper-V side, a few points. One is that I have been doing a bit of reading/testing and I think I can shave another bit off STH response times by moving to KVM, or certainly with Docker. The second is that AD is somewhat heavy to run for what I need. The third is that I am a bit wary of spending yet more money on the next version. Finally, the dynamically expanding disk snapshots are not great in Hyper-V when you have Linux guests; LVM snapshots are better at this. I really like Hyper-V and the ability to just download and run it on my desktop/notebook, but it has been far from zero maintenance even for the relatively simple bits I am using.
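For reference, the LVM side of that comparison is basically a one-liner per snapshot. A rough sketch of what I mean; the volume group, LV name, and snapshot size below are placeholders:

```python
# Taking and cleaning up an LVM snapshot of a guest's logical volume.
# VG name, LV name, and snapshot size are placeholders.
import subprocess

VG, LV = "vg0", "vm-100-disk-1"

# Create a 10G copy-on-write snapshot of the live volume.
subprocess.run(["lvcreate", "-s", "-L", "10G",
                "-n", f"{LV}-snap", f"/dev/{VG}/{LV}"], check=True)

# ...back up or inspect /dev/vg0/vm-100-disk-1-snap here...

# Remove the snapshot when done so it stops accumulating changes.
subprocess.run(["lvremove", "-f", f"/dev/{VG}/{LV}-snap"], check=True)
```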

@T_Minus Fiberhub. They have grown a ton since I was first there. You might also want to try Dacentec on the east coast for inexpensive colo, and Colounlimited in Dallas. I almost went with both for cheaper space/power. Also, if you are just looking for a few servers, it is really inexpensive to rent cheap-ish dedicated servers. That is what I would be doing if I did not have a pile of hardware around.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
@T_Minus Fiberhub. They have grown a ton since I was first there. You might want to also try Dacentec on the east coast for inexpensive colo and Colounlimited in Dallas. I almost went with both for cheaper space/ power. Also if you are just looking for a few servers, it is really inexpensive to rent cheap-ish dedicated servers. That is what I would be doing but for a pile of hardware around.
Thanks! I'll check out those data-centers.

I've been a 'renter' of hardware for >15 years and am making the move of some of that to owned :) The plan is to start with my own sites and then work on clients, as some need the upgrade from shared to dedicated, so the timing worked out well too.