How do you plan failover/redundancy design for shared/VPS hosting?


soamz

New Member
Aug 15, 2016
We are looking to offer both shared hosting and VPS hosting to our 1,000+ clients who are ready to subscribe with us.
I know no hosting company takes automatic backups every second for shared hosting or VPS clients without an extra charge, but I'm looking to do that for my clients.

So, how should I design it?

I need a completely disaster-proof design, so that even if a server burns out or dies for any reason, we can get clients back up and running, right where they left off, immediately.
I'm sure there are ways to do this; I just don't know what needs to be done.


Our current architecture:
one dedicated Supermicro server for shared clients,
one dedicated Supermicro server for VPS hosting.

Do we need one more server to act as an instant backup or clone image of the above two servers?

So that if one of those servers dies someday, we can simply fire up the third, backup server by changing its IP address or something like that?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I think before anyone can make suggestions on specifics, we'd need to know the resource utilization of those 1,000 users.

I've seen 'hosting re-sellers' cram hundreds, maybe even up to 1,000, tiny 'shared' hosting clients onto a single server. Back in the day you could see directory listings (not the files themselves) on hosts that didn't know what they were doing, so you could count hundreds upon hundreds of active users/sites.

Are you only using one server for shared and one server for VPS, or are you saying those are just the brands used for each?

Do you want RAID 1, RAID 6, etc., or are you looking for a 'backup archive' solution + HA + redundancy?
 

soamz

New Member
Aug 15, 2016
Yes, I'm looking for a 'backup archive' solution + HA + redundancy.

I'm new to this, so I'm looking for suggestions on what my first steps should be before I hire a consultant to set it up. I don't want to hire someone without knowing the basics of the plan I want to execute.


And yes, I talked to a consultant today and he gave me this diagram: Screenshot
He said to go for a physical server + storage server, run KVM + OpenStack + Ceph on it, and then simply start loading clients; as you grow, just add more compute power and storage. That's all, easily scalable and nothing to change, everything happens on the fly.

Is his plan good enough?
 

Markus

Member
Oct 25, 2015
You can also go for a Proxmox all-in-one setup. There you get KVM and Ceph with a nice GUI. On top of that, the support is not that expensive.
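Either way, the Ceph part is what actually gives you the failover behaviour the consultant described: every object is stored on several hosts, so one dead server does not lose data and the VM can be restarted on another node. A rough sketch of how the replication level is set on a pool (pool name and PG count are placeholders; under Proxmox you would normally do this via the GUI or pveceph, so treat this as illustration only):

#!/usr/bin/env python3
# Rough illustration: create a 3-way replicated Ceph pool for VM disks.
# Pool name and PG count are placeholders; run on a node with the admin keyring.
import subprocess

POOL = "vm-disks"   # placeholder pool name
PG_NUM = "128"      # placeholder; size this for your actual OSD count

def ceph(*args: str) -> None:
    # Thin wrapper around the ceph CLI.
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "pool", "create", POOL, PG_NUM)
ceph("osd", "pool", "set", POOL, "size", "3")      # keep 3 copies of every object
ceph("osd", "pool", "set", POOL, "min_size", "2")  # stay writable with one copy down

With size 3 and the default per-host failure domain you need at least three storage hosts, which is already more hardware than a single fat server.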

If you have more than 10 customers with potentially critical applications on your infrastructure, you want support...

Regards
Markus
 

soamz

New Member
Aug 15, 2016
@Markus, is Proxmox better than OpenStack?

I was thinking of building one big private cloud with KVM + OpenStack and using WHMCS + cPanel for customer management, billing, everything.
 

Markus

Member
Oct 25, 2015
As always: It depends.

Do you already have those 1,000 customers right now? If you want a built-in panel, then Proxmox is not the right solution for you.
Do you have fairly good knowledge of OpenStack?

Regards
Markus
 

soamz

New Member
Aug 15, 2016
My consultant is very, very good with OpenStack + KVM + cPanel + WHMCS.

And I think I've got the idea for redundancy/failover. Here it is, correct me if I'm wrong.

Set up one private cloud using a 10-core Xeon server with 64GB RAM + (4 x 3TB SAS drives) and add one SAN storage server for failover, meaning if the main Xeon server fails someday, the SAN storage server will automatically take over or can be restored from.

I think 1,000+ clients can easily run on the above, and when we get VPS clients or more clients, I will simply add another compute node (10-core Xeon with 64GB RAM), and so on.

Sounds good?
 

Markus

Member
Oct 25, 2015
Not really. I don't think you can host 1,000 more-or-less "big" clients on one fat rig.

All that parallel access to the resources (images, databases, and probably PHP sites, Python stuff...) will mess up the whole thing.
Also, there is no real automatic failover just because you put "a SAN storage server" into the game.
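Real automatic failover needs three things: replicated data, a health check that notices the primary is dead, and something that moves the service address to the survivor. In practice keepalived/VRRP handles that last part; a very rough Python sketch of the idea only, with placeholder addresses and interface:

#!/usr/bin/env python3
# Sketch of the "notice the primary is dead, claim the floating IP" half
# of a failover pair. Placeholder values; a real setup uses keepalived/VRRP
# and must also guarantee the data on this standby is current. Needs root.
import subprocess
import time

PRIMARY = "192.0.2.11"          # address of the active server (placeholder)
FLOATING_IP = "192.0.2.10/24"   # service IP the clients actually use (placeholder)
IFACE = "eth0"                  # interface to bring the IP up on (placeholder)
FAILS_BEFORE_TAKEOVER = 3       # require several missed checks before acting

def primary_alive() -> bool:
    # One ICMP echo with a 2-second timeout.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", PRIMARY],
        stdout=subprocess.DEVNULL,
    )
    return result.returncode == 0

def claim_floating_ip() -> None:
    # Bring the shared service IP up locally so traffic lands on this box.
    subprocess.run(["ip", "addr", "add", FLOATING_IP, "dev", IFACE], check=False)

fails = 0
while True:
    fails = 0 if primary_alive() else fails + 1
    if fails >= FAILS_BEFORE_TAKEOVER:
        claim_floating_ip()
        break
    time.sleep(5)

Moving the IP is the easy part; making sure the standby has current, consistent data (which a SAN server bolted on does not give you by itself) is the hard part.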

The infrastructure in your picture doesn't look so bad, but if you really have 1,000 clients right now, you have to buy more than one server.
Besides the limited IOPS (you want an SSD cache for that many clients), the network connection will be used quite heavily. Where do you want to host your servers? The connection between the servers (compute / storage) should be dedicated, so you need private VLANs and all the other stuff. You also want some kind of firewall or DDoS defence (at least one of your clients will "ask for an attack")...

What about IP addresses? While you can handle a number of webpages / mail domains with a small amount of public IPs, the server / VM customers will want a unique one...

Regards
Markus
 

soamz

New Member
Aug 15, 2016
I have a friend who is running a 1230v3 in-house with over 1,300 clients. I don't know how. He doesn't even use SSDs; he is on all HDDs.

I'm not saying I want to give my clients a bad experience by serving them from an under-powered server; I'm just trying to understand capacity and delivery.

For hosting 1,000+ WordPress websites with moderate traffic, how much traffic would you estimate passes through that server?
I guess less than 100Mbps, right?
 

soamz

New Member
Aug 15, 2016
This is the one server for shared clients I'm planning: E5-1280v5, 64GB of RAM, 1 x 256GB SSD (boot / OS), 1 x 2TB spinner (HDD clients), 1 x 512GB (or 1TB) SSD (SSD clients), 1 x 4TB for on-server backups. This can easily handle 1,000+ clients for sure.

And I am both the ISP and the datacenter myself, and I have more than 1Gbps of upload capacity lying unused. So we can use that.


And if we go with the design where compute and storage are separate, then we can use a Cisco 3750 switch to connect the compute server and the storage server so we get good transfer rates between them.

Anything else I've missed discussing?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I have a friend who is running a 1230v3 in-house with over 1,300 clients. I don't know how. He doesn't even use SSDs; he is on all HDDs.

I'm not saying I want to give my clients a bad experience by serving them from an under-powered server; I'm just trying to understand capacity and delivery.

For hosting 1,000+ WordPress websites with moderate traffic, how much traffic would you estimate passes through that server?
I guess less than 100Mbps, right?
It's impossible for us to guess without more specifics, and even then you likely have no idea what plugins people are using. Some plugins do backups, so imagine if you had 100 sites doing backups at once... not good. What about plugins that scan the database? Not good... and so on. Way too many variables.

What is "moderate" traffic to you? It's likely different than me any someone else, etc... Is that 100 people a day? 1000? 10,000 or is it only 10?

With 1,000 sites on one server the chance of something going wrong is VERY high, and instead of affecting, say, 100 clients it is going to affect nearly ALL of your clients. I would never put all my eggs in one basket even if it were "technically" possible.

With your number of clients I would want to use at minimum three different host machines, if not four. This way, when something happens (not if), you are only affecting 20-30% of your clients, not 50 to 100%.
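The arithmetic behind that, assuming clients are spread evenly:

# Blast radius of a single host failure, for 1000 clients spread evenly.
clients = 1000
for hosts in (1, 2, 3, 4):
    affected = clients / hosts
    print(f"{hosts} host(s): one failure hits ~{affected:.0f} clients ({100 / hosts:.0f}%)")
# 1 host  -> 1000 clients (100%)
# 2 hosts ->  500 clients (50%)
# 3 hosts -> ~333 clients (33%)
# 4 hosts ->  250 clients (25%)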

Remember, it's not just about 'fitting'; it's also about hardware failure. If a RAM stick dies or an HDD/SSD dies, you're going to have downtime with only one host system. Even two hosts likely can't handle the load of 1,000+ websites, especially not if 20%+ of them are "doing" something that needs CPU, RAM, and disk on a tiny 4-core server.
 

soamz

New Member
Aug 15, 2016
I have already bought a new /24 for starting my hosting business, and I'm planning to put a Juniper SRX1500 in front of the main Huawei switch that all the servers will be connected to.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
This is the one server for shared clients I'm planning: E5-1280v5, 64GB of RAM, 1 x 256GB SSD (boot / OS), 1 x 2TB spinner (HDD clients), 1 x 512GB (or 1TB) SSD (SSD clients), 1 x 4TB for on-server backups. This can easily handle 1,000+ clients for sure.

And I am both the ISP and the datacenter myself, and I have more than 1Gbps of upload capacity lying unused. So we can use that.

And if we go with the design where compute and storage are separate, then we can use a Cisco 3750 switch to connect the compute server and the storage server so we get good transfer rates between them.

Anything else I've missed discussing?
This makes no sense at all to me.

You are worried about redundancy, yet you want to use one HDD and one SSD and cram everyone onto one host.

This is the opposite of redundancy.

Having a "backup" copy of data is not redundancy to your clients it's a simple in case of an emergency we have your data still but when something goes wrong your site is going to be down for days until we order new parts to get it back online...


You need to have at least two drives in each set, and you need at least two systems, to have any redundancy.

This is the problem with "cheap" hosting: if you're buying to host "cheap" clients, it's not cheap to provide even a basic level of redundancy and actual 'quality' uptime.

If you're not worried about churn, uptime, etc., then do it as cheaply as possible. If you do care (and it seems you do), then it's going to cost a lot more up front.

Maybe even look into a 4-node 1366 system - ultra cheap.
 

soamz

New Member
Aug 15, 2016
Okay, please make suggestions, as I'm only reading about this and haven't yet tried it for real.
So, my bad!

So, you suggest getting two servers with these specs?

E5-1280v5, 64GB of RAM, 1 x 256GB SSD (boot / OS), 1 x 2TB spinner (HDD clients), 1 x 512GB (or 1TB) SSD (SSD clients), 1 x 4TB for on-server backups


So one acts as the active server and the second as a standby backup, connected to the active server over a switch so that it is instantly and automatically kept in sync, which means that if the active server fails, we can simply bring the second, standby server live?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Okay, please make suggestions, as I'm only reading about this and haven't yet tried it for real.
So, my bad!

So, you suggest getting two servers with these specs?

E5-1280v5, 64GB of RAM, 1 x 256GB SSD (boot / OS), 1 x 2TB spinner (HDD clients), 1 x 512GB (or 1TB) SSD (SSD clients), 1 x 4TB for on-server backups

So one acts as the active server and the second as a standby backup, connected to the active server over a switch so that it is instantly and automatically kept in sync, which means that if the active server fails, we can simply bring the second, standby server live?
That's the "cheapest" way to do it, and for 1000 clients would be a huge pain.

I would change the specs so that you have 2 x 256GB boot (mirrored/RAID 1), 2 x 2TB spinners (may as well go to 4TB for the money; well worth it for hosting), 2 x 512GB (or 2 x 1TB) SSDs, and your backups should be 2 x 4TB.

I would RAID 1 all of those on both the LIVE and the BACKUP system.
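Worth adding: a mirror only saves you if you notice when half of it dies. A tiny check you can run from cron against Linux software RAID (hardware RAID controllers have their own tools, so treat this as a sketch):

#!/usr/bin/env python3
# Flag any Linux md (software RAID) array that is running degraded.
# In /proc/mdstat a healthy two-disk mirror shows "[UU]"; an underscore
# such as "[U_]" means a missing or failed member.
with open("/proc/mdstat") as f:
    mdstat = f.read()

degraded = [
    line for line in mdstat.splitlines()
    if "[" in line and "_" in line
]

if degraded:
    print("Degraded RAID array(s) found:")
    for line in degraded:
        print(" ", line.strip())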

But again, this wouldn't be an ideal setup for 1,000 clients. I wouldn't trust one server with 1,000 clients.
It does at least offer you some type of redundancy in case of failure.

If you're going to have two identical systems like that with 1,000 clients, and that's all you can do, then I would run 500 clients on each and back them up to each other.
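The boring-but-reliable building block for "back up to each other" is rsync on a schedule (hostname and paths below are placeholders; databases need proper dumps or replication rather than raw file copies):

#!/usr/bin/env python3
# Rough cross-backup sketch: push this host's account data to its partner.
# Hostname and paths are placeholders; run the mirror-image job on the other box,
# and make sure the destination root already exists over there.
import subprocess

PARTNER = "host2.example.com"             # the other server (placeholder)
PATHS = ["/home/", "/etc/", "/var/www/"]  # what to mirror (placeholders)
DEST_ROOT = "/srv/partner-backup"         # where copies land on the partner (placeholder)

for path in PATHS:
    subprocess.run(
        ["rsync", "-a", "--delete", path, f"{PARTNER}:{DEST_ROOT}{path}"],
        check=True,
    )

Run from cron this gives each box a copy that is at most one interval behind; it is not instant failover, just a way to limit how much you lose and how long a rebuild takes.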

Again, not an ideal setup, but it could work.

Are you in the USA? Can you get shipments easily? For instance, I have an AMD 4-node system for sale here that has 16 cores per node and can handle lots of RAM per node. DDR3 is dirt cheap, and the entire 4-node chassis, minus RAM and HDD/SSD, is around $1,000 USD.

You could use a 4-node system like that for hosting clients and then get a second identical one for storage. How you configure the nodes is up to you, but a basic setup could be three active nodes with one spare in each chassis. That would spread everything around, still be really cheap overall, and leave room to grow.

I looked into doing this in the USA myself but then remembered how much I hate dealing with clients paying $5-$20/mo who think you owe them the world ;)
 

soamz

New Member
Aug 15, 2016
Okay, understood. So you suggest two of these servers (one live and one as an instant backup clone image):

E5-1280v5, 64GB of RAM, 2 x 256GB boot SSDs (mirrored/RAID 1), 2 x 4TB spinners, 2 x 1TB SSDs, and 2 x 4TB spinners again for backups.
RAID 1 all of this.


We offer clients $5 for 1GB of disk space and $7 for 1GB of SSD disk space.
And 90% of clients will take this package only.
Do you still think we need that much storage in this server?

That's about 8TB of HDD + 2TB of SSD, roughly 10TB = 10,000GB of disk space in total.

And you're saying it's better to stay at around 500 clients on the above setup and add more servers as we grow?
 

soamz

New Member
Aug 15, 2016
If I ask for an ideal setup to host, say, the first 500+ clients, what would you suggest?

Imagine HDD + SSD shared hosting clients paying no more than $5 a month.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I would do a Proxmox cluster; it's what I'm currently looking at doing for my own business and for my hosting business (business hosting, $100/mo minimum, most at $400+).
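For a sense of what that involves: the cluster itself is only a couple of CLI steps. Rough sketch below; cluster name, IPs and VM IDs are placeholders, and you should check the Proxmox docs for your version:

#!/usr/bin/env python3
# Sketch of the Proxmox VE clustering steps (really just CLI calls).
# Cluster name, IPs and VM IDs are placeholders.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

# On the first node: create the cluster.
run("pvecm", "create", "hosting-cluster")

# On each additional node: join it, pointing at the first node's IP.
# run("pvecm", "add", "192.0.2.21")

# With shared or replicated storage in place, mark guests as HA-managed
# so they are restarted on a surviving node after a host failure:
# run("ha-manager", "add", "vm:100")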

You don't want to fill your SSDs or HDDs beyond about 75% anyway.
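A trivial way to keep an eye on that across the boxes (mount points are placeholders):

#!/usr/bin/env python3
# Warn when any monitored filesystem goes past 75% used.
import shutil

MOUNTS = ["/", "/home", "/var/backups"]  # placeholder mount points
LIMIT = 0.75

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction > LIMIT:
        print(f"WARNING: {mount} is {used_fraction:.0%} full")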

I would also look at the E5-2670 v1: 8 cores vs. 4 (at a lower frequency), and it's also only 50 bucks :)
 

soamz

New Member
Aug 15, 2016
I'm confused.

Do you prefer to go with dedicated servers per group of clients, or to build one single private cloud with multiple servers stitched into one cluster and run everything as virtual instances from there?