DC Will Only Provide 40A / 120V Per Cabinet? Typical?


cmoski

New Member
Jul 7, 2020
Hey all -- wanted to ask a question of fellow STHers, as I'm sure that you all have much more experience in colocation than I do :)

A local datacenter in WA that I have otherwise liked up to this point has informed me that they will not provide more than 40A per cabinet.

My thinking was something along these lines: if I were to colo two SuperMicro Twins, which could probably exceed that limit at full load in only 4U, they are suggesting I'd need to purchase a full cabinet just for them, leaving ~36U of entirely wasted space.

The DC runs about $1,400/mo per cabinet for 40A / 120V plus 1Gbps up/down. They don't have any other power option available.

The concern they cited was heat dissipation capability. STH, am I yelling into the void at the laws of physics, or is this fairly typical practice?
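For reference, a rough back-of-the-envelope check of what a 40A / 120V feed actually buys, assuming the usual 80% continuous derating on the circuit and a hypothetical ~1.8kW full-load draw per Twin chassis (illustrative numbers, not specs for any particular SuperMicro model):

```python
# Rough power-budget sketch for a 40A / 120V cabinet feed.
# Assumptions (illustrative, not vendor specs): 80% continuous-load
# derating on the breaker, and two Twin chassis at ~1.8 kW each at full load.

breaker_amps = 40
voltage = 120
derating = 0.8  # continuous-load derating commonly applied to a branch circuit

usable_watts = breaker_amps * voltage * derating
print(f"Usable circuit capacity: {usable_watts:.0f} W")  # 3840 W

twin_full_load_watts = 1800   # hypothetical per-chassis full-load draw
chassis_count = 2
total_load = twin_full_load_watts * chassis_count
print(f"Estimated load for {chassis_count} Twins: {total_load} W")  # 3600 W

headroom = usable_watts - total_load
print(f"Headroom: {headroom:.0f} W")  # only ~240 W left for the rest of the rack
```

In other words, at 120V the breaker is effectively exhausted by roughly 4U of dense gear, which is exactly the "full cabinet for 4U" situation described above.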
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
It totally depends on how they built out the data room - they need to have the cooling, power distribution, and UPS / generator capacity to handle the load. They could have taken over an older facility that isn't designed for higher power density, or the room may be near its design limit and they don't have any spare capacity to offer you.

Only providing 120V sounds like an old design, or the 208V+ side of the power system is already at capacity. If they explicitly mentioned that they don't have the cooling for it, then that is entirely plausible.

Ask if they have other rooms with more available capacity, or shop around - you can find facilities that can provide more power density.

If you want over 5 kVA, it is a waste to do 120V - step up to 208/240.
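A quick sketch of why higher voltage helps, using the standard P = V × I arithmetic and the same 80% continuous derating assumption (my assumption, not a quote from any facility):

```python
# Current required to deliver 5 kVA at different supply voltages,
# plus the breaker size implied once an 80% continuous derating is applied.

target_va = 5000
derating = 0.8

for voltage in (120, 208, 240):
    amps = target_va / voltage
    breaker = amps / derating
    print(f"{voltage}V: {amps:.1f}A continuous -> ~{breaker:.0f}A breaker")

# 120V: 41.7A continuous -> ~52A breaker (i.e. more than a single 40A circuit)
# 208V: 24.0A continuous -> ~30A breaker
# 240V: 20.8A continuous -> ~26A breaker
```

So a single 208V/30A drop covers a load that would not even fit on a 40A circuit at 120V.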
 

Patrick

Administrator
Staff member
Dec 21, 2010
That is not overly crazy. There are a lot of data centers that target ~5kW/rack for cooling. You can see a lot of racks in those data centers holding just a firewall/switch and a 4U GPU box.

We have single-node systems that are 2-5kW already today, and that is going up. Next-gen GPUs will go beyond today's 400-500W to 600W, with 800W coming soon as well.

If you read the recent Tyan 1P AMD EPYC review, that is a big reason I was discussing density and 1P there. This generation of CPUs will often be 200-280W, but the next generations are going up as well. The question is when liquid cooling will happen, and whether it will be immersion or water blocks, not if. Then there is the existing infrastructure that will not scale up to the new power/cooling requirements.

Just to give you a sense, 5 years ago the STH lab had 10kW of capacity, not load. Right now, as I am typing this, we are running at around 35kW of load, and we have 4 systems using >10kW of that between them.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
Not surprised. Cooling often tends to be the limiting factor. The last datacenter I worked in was quite modern and had 70kW of power capacity per rack (dual 60A 415V 3-phase drops), but that could only be sustained when outdoor temperatures were below 35C (often exceeded in Phoenix), since the cooling capacity was only ~30kW per rack. It's likely not economically viable to refit the HVAC and power infrastructure, which is why they're not able to offer more. With ~100 racks, we could pull ~10MW, which is similar to the draw of a small city.
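For what it's worth, that 70kW figure lines up with the three-phase arithmetic, assuming the usual √3 factor and an 80% continuous derating per drop (my assumptions, not a statement about that facility's actual provisioning):

```python
import math

# Three-phase power available from dual 60A 415V drops,
# assuming an 80% continuous derating on each drop.

volts = 415
amps = 60
drops = 2
derating = 0.8

per_drop_kva = math.sqrt(3) * volts * amps / 1000
total_kw = per_drop_kva * drops * derating
print(f"Per drop: {per_drop_kva:.1f} kVA, total usable: {total_kw:.1f} kW")
# Per drop: 43.1 kVA, total usable: ~69 kW -- roughly the 70kW per rack quoted above
```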