Pardon our dust! Jan 2015


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Hi STH community,

I just wanted to give everyone a quick heads-up that we may be experiencing a small bit of downtime.

What is happening?
We are moving from our current 1/4 cabinet of colocation excellence to a 1/2 cabinet with more power, more bandwidth and more space.

During the move, we are combining some of the existing hardware with new hardware, and that will necessitate physically moving boxes and a short period of downtime. The other options cost significantly more, and this is simply a move from one cabinet to another, so it should not take "too" long (fingers crossed).

When
I got an e-mail saying they are working on this over the next few days, so it will likely happen soon.

More details
After the "great crash" of 2014, the architecture needed to change (significantly). We are moving to much higher-end SSDs from Intel (S3500 and S3700), SanDisk (Lightning SLC drives and Optimus), and Seagate (Pulsar.2), all in RAID 1 arrays with hot spares everywhere.

We are adding another backup node in the form of a 4U server that will have multiple RAID 1 volumes and hot spares. Currently this is set up as a ZFS-based backup solution, but we do have SLC and MLC SAS drives available for use.
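For anyone curious what a layout like that looks like in ZFS terms, here is a minimal sketch: a pool built from mirrored pairs (ZFS's equivalent of RAID 1) with a designated hot spare. The pool name and device paths are placeholders, not the actual STH configuration.

```shell
# Hypothetical example: a backup pool of two mirrored pairs plus a hot spare.
# Device names (sdb..sdf) and the pool name are illustrative only.
zpool create backup \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde \
    spare /dev/sdf

# Have ZFS automatically pull in the spare when a mirror member fails.
zpool set autoreplace=on backup

# Lightweight compression is a common choice for a backup target.
zfs set compression=lz4 backup

# Verify the layout.
zpool status backup
```

With `autoreplace=on`, a failed disk in either mirror is resilvered onto the spare without manual intervention, which matches the "hot spares everywhere" goal above.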

We are adding more nodes/chassis to provide a small lab environment, additional test nodes, etc. The great crash seems to have been caused by a power issue in a 4-in-1 chassis while we were using 3 of our 6 total colocation nodes during setup of the second chassis. We are moving to more chassis so we never again have to do maintenance with all of our eggs in one basket. Lesson learned.

10/40GbE is being added to help cope with the extra traffic (1GbE will be there just in case, too).

Conclusion
The bottom line is that this just needs to get done. Maintenance on the current setup leaves us with only one functional chassis, and I want to get our setup to something with room to breathe. Costs are relatively low, so it made sense.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Pat, will you report what kind of issues, if any, you see using ZFS as a backup medium on spindle disks, and how fragmentation affects you?

Thanks for investing in this site, too; it has become a very valuable resource.
 

Patrick
Yes, it is. Same datacenter, larger space with more power. We are just moving a few aisles over, which is why we are going with a lift-and-shift strategy.
 

Patrick
Significantly slower than I wanted, but the transition is scheduled for Friday, April 10. A short period of downtime will occur that afternoon.
 

Patrick
OK, looks like this is about to happen. Expect that we will be losing connectivity shortly.
 

Patrick
Slowly getting there. @eva2000, the forum's Centmin Mod firewall decided it no longer liked us!
 

Patrick
Somewhat. One of the main VM hosts is still not up. Very annoying.
 

eva2000

Active Member
Apr 15, 2013
244
49
28
Brisbane, Australia
centminmod.com