Is STH running on premium bandwidth or volume bandwidth?


uberguru

Member
Jun 7, 2013
OK, I have been pondering whether to go with premium bandwidth or volume bandwidth with LeaseWeb: All about network at LeaseWeb

Now I just have a question: what is STH running on? Is it on premium bandwidth or volume bandwidth? You can check IP Transit & Transport Services | Fiberhub for some information.

The reason I am asking is that I really want to understand how much slower or how much more unreliable a volume network really is. I mean, as long as visitors can access the website, is there really anything else that matters?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I can't answer your question about STH, but I can share my experiences with "premium" bandwidth. Many years ago I co-founded a very early SaaS company called Promptu. Our corporate-focused, web-based software had 400K users worldwide, and we paid a great deal of attention to making sure that each paying seat was able to access the system with reasonable speed. We used Keynote and Gomez to monitor uptime and access speed from several hundred locations around the world across many different networks, generating tens or hundreds of thousands of data points per day.

At first, our two racks of colo servers at Equinix used a redundant connection to the Internet via a well-known network provider. We had a nominal 100% "uptime" for the app cluster, by which I mean that the app kept running and the internet connection was always up, but if you looked at the percentage of user requests that completed successfully and quickly, the world-wide "apparent" uptime was more like 99.3%**. We then switched to ultra-premium bandwidth via InterNap, which connected us to a whole bunch of different networks with some fancy routing software managing everything. That bandwidth plus a number of other changes lifted the transaction success rate to more like 99.95% if I remember correctly.

So the switch from "normal" to "premium" bandwidth might not be noticeable unless you are evaluating it from an external point of view.


**The normal symptom was that a particular network in a specific part of the world would go pear-shaped for a few minutes to a few hours and then recover. That part of the world would see our app as being "down" for that period of time, even though it was actually running just fine. The ultra-premium bandwidth was much much better at avoiding these situations.
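If it helps to see the mechanics, the essence of what those external checks do is tiny. Here is a minimal sketch of my own (purely illustrative, not what Keynote or Gomez actually run; the URLs and the 2-second threshold are placeholders) of probing a site from outside and counting a request as successful only if it completes quickly:

```python
# Minimal external availability probe: fetch each URL, time it, and only count
# a request as "successful" if it returns 200 within the latency threshold.
# URLs and the 2-second threshold are illustrative placeholders.
import time
import urllib.request

URLS = ["https://example.com/", "https://example.com/login"]  # hypothetical endpoints
THRESHOLD_SECONDS = 2.0  # slower than this and the user perceives the site as broken

def probe(url):
    """Return (success, elapsed_seconds) for a single request."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            elapsed = time.monotonic() - start
            return (resp.status == 200 and elapsed <= THRESHOLD_SECONDS, elapsed)
    except Exception:
        return (False, time.monotonic() - start)

if __name__ == "__main__":
    results = [(url, *probe(url)) for url in URLS]
    ok = sum(1 for _, success, _ in results if success)
    for url, success, elapsed in results:
        print(f"{url}: {'OK' if success else 'FAIL'} in {elapsed:.2f}s")
    print(f"apparent success rate: {ok / len(results):.0%}")
```

The commercial services just do this from hundreds of networks around the world at once and aggregate the results, which is what exposes the regional problems that a single-location check never sees.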

 

uberguru

Member
Jun 7, 2013
Thanks for your response.
Actually, I have spent a considerable amount of time thinking seriously about whether I really need premium bandwidth, because it is about double the price at LeaseWeb, at least. The thing is, I am only going to be running websites/forums/blogs using PHP/Python/MySQL on Apache/Nginx web servers, so just the typical combos. Some websites will have videos, so there will be streaming, and some websites will offer downloads.
So I am thinking very seriously about whether I really need premium bandwidth. I am sure my bandwidth needs will grow and the cost will factor in, but I am willing to go premium if there is a significant advantage for me; otherwise I would like to settle for volume bandwidth.

One more question I have: most service providers/datacenters use a volume network anyway, and in that sense most websites run on a volume network, so I am thinking a volume network should be adequate. But please let me know if my logic is wrong.

Thanks.
 

Patrick

Administrator
Staff member
Dec 21, 2010
A few quick thoughts:

STH does not use "premium" providers, just the data center's standard mix.
Moving to colocation cut page load times significantly, primarily because both the forums and the main site now have significant headroom in CPU, RAM (everything can be cached in RAM), and inbound/outbound bandwidth, and disk writes now hit SSDs instead of EBS data stores. It is quite shocking how much this helped, even over the Amazon medium instance.
We ran experiments, and the site is actually getting faster response times for over 90% of our visitors than it did with CloudFlare enabled.
If I were doing downloads and video streaming, I would probably look at using a CDN. That is a very typical architecture at this point.

Could it be faster? Certainly, but for this type of application it costs a lot more and adds more complexity for very little in return.

BTW, dba - the main site and forums are in the >99.7% range on all transactions right now.
 

Toddh

Member
Jan 30, 2013
I think the question you should be asking is: what are the requirements for your apps? For websites and forums I don't think you would see a difference between providers; the difference in most apps between 40ms and 60ms (or even 80ms) is not noticeable to end users. Like Patrick mentioned, you would get better results putting that money into hardware. The hardware and software you run your systems on will make a larger difference in customer experience, outages, and downtime.

We host with Internap, who have redundancy at every level: power, interconnects, AC cooling, etc. We have some clients that require the highest level of service. We have been with Internap at this data center for over 10 years and have never experienced an outage. We do get notifications of "sub-optimal routing" when an interconnect is experiencing an issue, but even those are unnoticeable.

Prior to Internap we hosted at several smaller data centers, and we did experience some outages. At that time our clients were pretty understanding.

One last thing: providers publish numbers like 99.99% network availability. We currently have a client whose application has been running at 99.97% over the past year. They had a NIC fail in a server a couple of months back and were down for about 40 minutes. They also have to do maintenance reboots for Windows updates monthly. They want better availability; 99.99% uptime equates to roughly 4.3 minutes of downtime per month. So we gave them pricing on a clustered environment to improve their uptime. We are comfortable with Internap being able to give us 100% uptime; all the changes we suggested were in the customer's own hardware and network connectivity.
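For anyone who wants to sanity-check numbers like these, it is simple arithmetic. Here is a quick back-of-the-envelope illustration of my own (assuming a 30-day month, not our client's actual figures):

```python
# Downtime budget per 30-day month (43,200 minutes) for common uptime targets.
MINUTES_PER_MONTH = 30 * 24 * 60

for availability in (0.995, 0.997, 0.999, 0.9997, 0.9999):
    allowed = (1 - availability) * MINUTES_PER_MONTH
    print(f"{availability:.2%} uptime allows {allowed:.1f} minutes of downtime per month")

# 99.50% uptime allows 216.0 minutes of downtime per month
# 99.70% uptime allows 129.6 minutes of downtime per month
# 99.90% uptime allows 43.2 minutes of downtime per month
# 99.97% uptime allows 13.0 minutes of downtime per month
# 99.99% uptime allows 4.3 minutes of downtime per month
```

A single 40-minute NIC failure by itself blows a 99.99% monthly budget many times over, which is why that target usually means clustering rather than better single boxes.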


 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
How are you measuring the 99.7%? Probably in some detail, but I want to share an example for others who might not be quite so sophisticated.

I recently helped a company, as a favor really, to improve their web site a little bit. One of the first things I did was gather some basic metrics. Prior to making any improvements, New Relic showed a 0.31% transaction error rate, which some would call a 99.69% success rate, and which seemed pretty good to them. But when I looked at the user experience from an external perspective, only 91% of all transactions were completed, and completed quickly enough to be considered "successful". Average load time in two of their big markets, Taiwan and Brazil, was 6x longer than in the US and UK - lengthy enough that those users certainly thought the site was "broken".
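To make the difference concrete, here is a rough sketch with made-up numbers (not that company's data) of how the same traffic scores very differently depending on whether "too slow" counts as a failure:

```python
# The same traffic, scored two ways: "no hard error" vs. "no hard error AND
# finished within a latency budget". All numbers are made up for illustration.
SLOW_THRESHOLD_SECONDS = 4.0  # beyond this, users tend to give up

# (latency_seconds, raised_error) for a handful of hypothetical transactions
transactions = [
    (0.8, False), (1.2, False), (0.9, False), (7.5, False),
    (1.1, False), (0.7, False), (12.0, False), (1.0, True),
]

server_ok = sum(1 for _, errored in transactions if not errored)
apparent_ok = sum(
    1
    for latency, errored in transactions
    if not errored and latency <= SLOW_THRESHOLD_SECONDS
)

print(f"server-side success rate: {server_ok / len(transactions):.1%}")      # 87.5%
print(f"user-apparent success rate: {apparent_ok / len(transactions):.1%}")  # 62.5%
```

The server-side number only sees the one hard error; the user-apparent number also penalizes the two transactions that technically completed but took so long the visitor gave up.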

 

TangoWhiskey9

Active Member
Jun 28, 2013
Long-time lurker. First post, but I hope to be more active now.

I also saw the IPv4 thread. I really want to stress at this point that you should spend serious time working in a cloud lab. You can use Amazon, HP, Azure, or others to learn most of the answers to the questions you are asking. I would do this just to be confident and to blueprint before deployment.

Remember, with colo you are responsible for everything. If you are asking all of these questions, what happens when a site gets hacked or a machine fails? A lot of planning goes into that. I don't think STH needed a colo, but they overbuilt like crazy from what it sounds like, and they have an admin they pay to maintain things (plus folks here seem to be somewhat technically inclined).

Just a caution: your questions seem like they are from early in the process, not the about-to-deploy stage.

More on topic, why not go with the lower cost and then upgrade if necessary? Most providers will make it easy for you to pay more but harder to go the other way.
 

Toddh

Member
Jan 30, 2013
dba, you are right on about the end user experiencing different numbers than our reporting shows. We monitor the availability of the host OS and services, basically that the system is functioning and reachable by the public. From our standpoint as the provider, that is what we are responsible for.


 

uberguru

Member
Jun 7, 2013
Yes, you are somewhat right about spending time in a cloud lab, lol. But the thing is, I know I am not an expert yet, and I can tell you I am improving day by day. And to add, I have been administering my own servers, VPS and dedicated, for about 4 years now, with root access and all. So the only real gap I have is when it comes to networking, which I need now because colocation is a bit different from VPS and dedicated, where things are set up for you.

The way I learn faster is not only by reading and finding things out, but also by asking questions, especially of people already doing what I need to learn. That, to me, is the most important part, and it has been working fine. Sometimes I ask questions not because I have no idea, but just to get people's take and learn a few things in the process.

So yeah, I am not that bad at all, just weak on the networking part, and I am working on that. I guarantee you that in the next 2 months I will have learned most of the things I need to know. Not all, obviously.
 

uberguru

Member
Jun 7, 2013


There you go, you just mentioned New Relic. I have never really used monitoring, especially error-rate monitoring; to be sincere with you, I had never even heard of that before. The most I do is measure uptime using Pingdom, and that is really helpful, because sometimes I may be having dinner when I get the message that my site is down, and then I just work out how to fix what happened. So yeah, if you know of more web app monitoring tools or anything else I should be aware of, please let me know.

Thanks.
 

uberguru

Member
Jun 7, 2013

I am assuming you are using colocation at Internap?

But here is what I am concerned about. I will basically be running typical content websites and forums in the education niche, some client websites that contain videos, lots of videos, and several websites with downloads, mostly 0.5MB or less but thousands of them. Those are the typical websites I will be running. I am currently running them on a powerful VPS, but I am trying to expand aggressively and I know I need (not want) colocation. Just agree with me that I need colocation, lol. Anyway, my main issue is that I do not care much about a 100% network; seriously, I could not care less, because I am not offering a business service to anyone, so 99.5% is not even bad to me. If premium is only about getting closer to 100% and that is its one advantage over budget, then I certainly do not need premium. But of course that is not the only difference. My problem is that I want to know why I would need premium.

Now, I mentioned LeaseWeb specifically because I will be going with them, guaranteed. They have a 3Tbit network, so I am thinking to myself that even their volume bandwidth is probably better than some providers' premium. And looking at the list of their upstream providers, they are mostly tier 1: All about network at LeaseWeb

which makes me think maybe I should just opt for volume. I mean, even STH, I can assume, is on volume, or maybe I should say a budget network (am I right, Patrick?). So if STH is on a budget network, why do I need premium? lol

Now, before I end this post: if premium means a much faster response time and the lowest latency, to the point of being about 50% faster than volume, while volume lags for several days a month and the websites just act very slow occasionally, then I know I do not want volume. So yeah, again, remember I am talking about LeaseWeb here; will that be how their volume is? I assume not.

So I hope I explained what I want correctly.
 

Patrick

Administrator
Staff member
Dec 21, 2010
We just use FH's standard blend. It is actually faster than I had expected.

If this were an online game platform, VoIP platform, or the like, I would probably have different criteria.

New Relic is awesome! It actually helped me troubleshoot something recently that would otherwise have been painful.

I have response time stats too. It is fairly awesome that average full page load times are now less than 1/7th of the pre-colo load times.
 

uberguru

Member
Jun 7, 2013
Are you using the free version? Or is the free version enough? Can the free version be used on multiple servers? VMs?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Currently yes. I do want their up-level version. Server monitoring is "free", but the rich website monitoring costs more. I think we have a thread on this and alternatives (I am on my phone, so it is harder to search).
 

uberguru

Member
Jun 7, 2013
If you get a chance, please post a link to the thread. Thanks.
 

uberguru

Member
Jun 7, 2013
Are there open-source alternatives here? And the New Relic free version, is it self-hosted or hosted on New Relic's servers?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
New Relic is hosted by New Relic. The free version is limited to basic server monitoring, not application monitoring. You can also compare Nagios, Munin, Ganglia, NetXMS, OpenNMS, Pandora FMS, Shinken, and a few other open-source products designed to address a similar need. For basic monitoring of a single server, New Relic is very appealing: easy to install and easy to use.
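And if all you want is a Pingdom-style "the site is down" e-mail from a machine you control, the core idea is only a few lines. This is a rough, hypothetical sketch (placeholder URL and addresses, and it assumes a local mail server is listening); a hosted service is still worth keeping because it checks from outside your own network:

```python
# A bare-bones "is my site up?" check, meant to run from cron every few minutes.
# The URL, e-mail address, and SMTP host below are placeholders, and a local
# mail server is assumed to be listening on localhost.
import smtplib
import urllib.request
from email.message import EmailMessage

SITE_URL = "https://example.com/"   # site to watch (placeholder)
ALERT_ADDRESS = "you@example.com"   # where the alert goes (placeholder)
SMTP_HOST = "localhost"             # assumes a local MTA

def site_is_up(url):
    try:
        with urllib.request.urlopen(url, timeout=15) as resp:
            return resp.status == 200
    except Exception:
        return False

def send_alert(url):
    msg = EmailMessage()
    msg["Subject"] = f"ALERT: {url} appears to be down"
    msg["From"] = ALERT_ADDRESS
    msg["To"] = ALERT_ADDRESS
    msg.set_content(f"A scheduled check could not load {url}.")
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    if not site_is_up(SITE_URL):
        send_alert(SITE_URL)
```

Run it from cron and it stays quiet until a check fails; the open-source tools above add the dashboards, history, and escalation that a script like this does not.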
 

uberguru

Member
Jun 7, 2013
Alright thanks
 