Backblaze B2 real-life stories


Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Hi,
I am considering trying Backblaze B2 for storing my Veeam compressed backup files, i.e. multi-gigabyte files, some bigger than 150GB.

So I wonder if anyone here has experience with B2 and can provide real statistics of the following sort:

  • Average size of files backed up
  • Total size of files backed up
  • Schedule of backup
  • Price you pay (per month, per year, whatever)

It's the pricing part I am mostly curious about, since it's very hard to do a real calculation: the price is based on transactions + total storage, and who knows how many transactions will be generated for the files I have in my NAS.
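My own rough stab at the transaction count, assuming the client uploads big files in ~100MB chunks (the chunk size is pure guesswork on my part):
Code:
150 GB full / 100 MB chunks   ≈ 1,500 upload calls per weekly full
7 incrementals x ~5 GB        ≈   350 more calls per week
Whether roughly 2,000 calls a week costs anything depends on how B2 classes those calls, which is exactly the part I can't predict.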

At the moment I am using rsync.net, which is great, except the upload speed is kind of slow: I only get around 15MB/s at best, and sometimes as low as 5-6MB/s. And since my data changes 100% every week, i.e. Veeam generates a synthetic full backup file every week, I have the "pleasure" of deleting basically everything and re-uploading everything once a week - so I was hoping B2 would have more bandwidth for me.

Thanks in advance.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
If your backups are basically a few large files per day that you just push up to cloud storage, then the calculation shouldn't be too bad.
Are you trying to do an incremental backup vs what is in the previous cloud archive copies, or are all deltas computed locally and just the final big file uploaded at the end?
I put 60-70G/day into Wasabi just for DB backups, and since we want to keep them around for several months, the 90-day minimum retention on Wasabi isn't a problem. One account has 17T in it and runs ~$70/mo.
When I started I compared B2 and Wasabi, and Wasabi was a bit cheaper for our use case. Looking at the pricing tables on both sites today, I think as long as you don't make thousands of requests per month, the pricing would be on par (neither site mentions the other, which is a good sign they are on par price-wise ;) )
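Back-of-envelope, using B2's list rate as I read it today (so verify before trusting me):
Code:
Wasabi (our actual bill):     $70 / 17 TB          ≈ $4.1 per TB-month
B2 (list, $0.005 per GB-mo):  17 TB x $5.12/TB-mo  ≈ $87/mo before transaction charges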
I am also curious what people see with B2 for comparison. This morning's 60G backup uploaded in 31 min from Amazon's us-east-2 (Ohio) to Wasabi's us-east-1 (N. Virginia).
 

Marjan

New Member
Nov 6, 2016
My use case is different from yours. I am backing up my personal files; there are no big files at all. Files are backed up from TrueNAS to Backblaze, so I am using B2 Cloud Storage. For now no files are deleted - probably in the future, but not for now. Files are added irregularly: sometimes only a few, sometimes quite a lot, once a month or nothing for months...
All that being said, some info:
  • average size of files between 3 and 6 MB
  • total size is around 240GB
  • daily schedule
  • I pay around $1.40 per month (for now)
In B2, some transactions are free and some are not. For the ones that are not free, there is a certain daily amount that is free, after which something must be paid. I never hit those limits, so for now I just pay for the storage itself.
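If it helps, this is the breakdown as I understand it from their pricing page (numbers may have changed, so verify yourself):
Code:
Class A  (free):              uploads, deletes
Class B  ($0.004 per 10,000): downloads, get file info - first 2,500 calls/day free
Class C  ($0.004 per 1,000):  list files/buckets       - first 2,500 calls/day free
Download bandwidth is billed separately on top of that ($0.01/GB when I checked, with the first 1GB/day free).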
A good thing about B2 is that you pay per GB, and the minimum charge is $0.50. If your monthly bill is something like $0.10, then after 5 months you pay $0.50 - your credit card isn't charged until then. But this only applies to a few tens of GB; not many people will be in this situation.

This is not so useful for your use case, but someone else might find it useful.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Are you trying to do an incremental backup vs what is in the previous cloud archive copies, or are all deltas computed locally and just the final big file uploaded at the end?
Yeah - daily I upload incremental files that are 3-8GB in size. Once a week, all the incrementals are "merged" into one "full" backup, which is then uploaded, and all the incremental files are removed. These full files are 13-150GB in size - so it's multi-gig files, and few of them.

Of course, deleting everything once a week and uploading everything again is not "efficient" - but that is how the backup program Veeam does it. I could just do incrementals forever, but that would mean slower restore speeds in case something had to be restored.

Shame - so few are using B2? Or unwilling to share stories.

I still have upload issues to rsync.net - and even though their support has tried to help me, no matter what I have done, it has not helped. I am simply too far away from their servers in terms of hops, so flow control or traffic shaping along the way "****s" me up. Changing the congestion control algorithm changed nothing as far as I can see.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
That type of incremental + regular full backup scheme is quite common - we were doing it with tape for decades, and it works well to balance total data written (tapes used / cloud $ consumed) against the number of files needed to do a full restore.
If you can spool the backup files to a local disk and then push them up to B2 / Wasabi / S3 / Azure / Google Cloud, you can swap services as needed for experimentation. Also, if you just look across a week or three of file sizes in the local spool or in your rsync.net storage, you can estimate the amount of storage you would need with each service and the resulting cost.
It sounds like you are under the $20/mo range regardless with your total volume of data, so it's pretty inexpensive to experiment.
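If you want the number without guessing, something like this against the local spool gives you a 30-day total to plug into the calculators (a sketch - assumes GNU find and Veeam's usual .vbk/.vib extensions; adjust the path):
Code:
# sum the sizes of all backup files written in the last 30 days
find /backup/spool \( -name '*.vbk' -o -name '*.vib' \) -mtime -30 -printf '%s\n' \
    | awk '{ sum += $1 } END { printf "%.0f GB in the last 30 days\n", sum / 1e9 }'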

You may also need to investigate your connection, or the path from your servers to rsync.net, and see if the problem is there. It might be as simple as adjusting the time you do your backups to avoid congested transit links or local issues. Keep one of the full backup files around and time uploading it to a few services as a quick sanity check to see if any performs radically better than the others.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
If you can spool the backup files to a local disk and then push them up to B2 / Wasabi / S3 / Azure / Google Cloud
They already reside on my backup server, where Veeam pushes the backup files to - so yeah, good suggestion.

You may also need to investigate your connection, or the path from your servers to rsync.net, and see if the problem is there. It might be as simple as adjusting the time you do your backups to avoid congested transit links or local issues. Keep one of the full backup files around and time uploading it to a few services as a quick sanity check to see if any performs radically better than the others.
I have worked with rsync.net support on this by tweaking the TCP settings on my server, so that it uses a congestion control algorithm better suited to fast networks with high latency, among other settings. The reason it goes so badly, in my opinion, is that it is 10 hops to send data not very far, and a lot of congestion + traffic shaping can go on across that many hops. It is quite possible that this is just how it is when you send big blobs of data far away across the internet. It is not the end of the world that it takes 5+ hours to upload around 200GB in 3 files or so, but it would be nice if I could actually use my 1Gbit internet connection for something instead of just getting 1/10th of the performance it can do.
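For the curious, this is the kind of thing we tried on my FreeBSD box (reconstructed from memory, so double-check the knob names on your version):
Code:
# switch congestion control to an algorithm meant for high bandwidth-delay paths
kldload cc_htcp
sysctl net.inet.tcp.cc.algorithm=htcp
# let the send buffer auto-grow so a single stream can fill a long fat pipe
sysctl net.inet.tcp.sendbuf_auto=1
sysctl net.inet.tcp.sendbuf_max=16777216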

I have also changed my upload time several times; sometimes it changes things, but mostly it is the same. But I will try that again, since I have moved to a better source server and changed the congestion control algorithm.

costing around $8 a month
That is decent. B2's calculator is a bit strange - I cannot model an initial upload of 200GB, then uploading 1TB during the month and deleting 1TB; it won't let me delete more than the initial upload. But it seems like I would possibly end up even lower than that on B2, which is good to know - I will try to push up a "full" backup file and see how well the network behaves.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Just did a traceroute to s3.eu-central-003.backblazeb2.com - where my B2 bucket would be stored - and I have 9 hops, so not much better than rsync.net. But the latency is almost half of rsync.net's, so that might mean I can push more data, since latency is probably mostly caused by distance - and from the latency I get, Backblaze looks like it's located somewhere in northern Europe, probably Amsterdam.

Code:
traceroute -P icmp s3.eu-central-003.backblazeb2.com
traceroute to s3.eu-central-003.backblazeb2.com (45.11.37.254), 64 hops max, 48 byte packets
 1  gw (192.168.0.1)  0.110 ms  0.145 ms  0.127 ms
 2  gc-edge1.gigabit.dk (*.*.*.*)  1.118 ms  0.787 ms  0.945 ms
 3  te0-1-1-5.rcr21.cph01.atlas.cogentco.com (149.6.137.49)  1.488 ms  1.487 ms  1.691 ms
 4  be2496.ccr41.ham01.atlas.cogentco.com (154.54.61.221)  6.331 ms  6.710 ms  6.898 ms
 5  be2815.ccr41.ams03.atlas.cogentco.com (154.54.38.205)  14.346 ms  14.163 ms  14.698 ms
 6  be2550.agr31.ams03.atlas.cogentco.com (154.54.56.14)  14.347 ms  14.253 ms  14.394 ms
 7  be2049.nr51.b021908-0.ams03.atlas.cogentco.com (154.25.10.218)  14.945 ms  15.082 ms  15.118 ms
 8  unwired.demarc.cogentco.com (149.14.141.250)  14.617 ms  14.913 ms  15.696 ms
 9  45.11.37.254 (45.11.37.254)  14.738 ms  14.484 ms  14.885 ms
vs

Code:
traceroute -P icmp prio.ch-s011.rsync.net
traceroute to prio.ch-s011.rsync.net (77.109.148.21), 64 hops max, 48 byte packets
 1  gw (192.168.0.1)  0.118 ms  0.094 ms  0.058 ms
 2  gc-edge1.gigabit.dk (*.*.*.*)  0.712 ms  1.237 ms  0.926 ms
 3  gigabit-aps.10gigabitethernet1-2.core1.cph1.he.net (216.66.83.102)  1.121 ms  2.140 ms  2.292 ms
 4  10ge0-2.core2.cph1.he.net (216.66.83.101)  3.446 ms  3.025 ms  1.423 ms
 5  init7.dix.dk (192.38.7.52)  24.873 ms  1.272 ms  0.927 ms
 6  r1ams2.core.init7.net (5.180.132.182)  47.847 ms  49.051 ms  48.299 ms
 7  r2fra2.core.init7.net (5.180.132.222)  55.352 ms  54.604 ms  54.387 ms
 8  r1fra2.core.init7.net (5.180.132.221)  54.526 ms  54.129 ms  54.295 ms
 9  r1fra3.core.init7.net (5.180.132.219)  54.113 ms  55.548 ms  55.286 ms
10  r2zrh2.core.init7.net (5.180.132.212)  49.826 ms  49.910 ms  49.984 ms
11  77-109-148-21.fiber7.init7.net (77.109.148.21)  26.541 ms  26.542 ms  26.710 ms
I think Init7's network just "sucks big hairy donkey balls", or they do crazy traffic shaping, looking at the traceroute - or else they just respond slowly to ICMP on purpose. Anyway, it's looking decent in terms of where B2 has its servers located.

But let's see - I might not get higher speed going to B2 even though it is much closer.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Just tried to sync my single 150GB backup file and the results are much better:
Code:
Transferred:      143.288G / 143.288 GBytes, 100%, 37.573 MBytes/s, ETA 0s
Transferred:            2 / 2, 100%
Elapsed time:     1h5m6.0s
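For reference, the invocation was roughly this (bucket and file names are placeholders, and the exact flags are from memory):
Code:
rclone copy /backups/weekly-full.vbk b2:my-bucket/veeam \
    --b2-chunk-size 100M --b2-upload-concurrency 8 --progress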
The same upload to rsync.net takes me at least 3 hours on a good day, and more on a bad day.

So I might give b2 a go.

I might be able to get somewhat similar results with rsync.net - but it would require using rclone and piping through a "chunker" remote and then finally to rsync.net.

Standard rclone over SFTP does not chunk the big files, and chunking is really needed to be able to parallelize.
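Roughly this in rclone.conf, if I go that route (an untested sketch - host, user, and bucket path are placeholders):
Code:
[rsyncnet]
type = sftp
host = prio.ch-s011.rsync.net
user = myuser

# wraps the sftp remote and splits big files into fixed-size chunks
[rsyncnet-chunked]
type = chunker
remote = rsyncnet:backups
chunk_size = 1G
Then copy to rsyncnet-chunked: with --transfers 8 - though whether the chunks of a single file actually go up in parallel is something I would have to test.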

My normal rsync just finished:
Code:
total size is 198,760,039,193 speedup is 1.19
2021-03-28 14:07:13 rsync finished took 4 hours, 7 minutes, 13 seconds
So, needless to say, I have a better connection to B2.

And I just want to say: this is no fault of rsync.net. It's just a consequence of where I am located, the route, and the fact that I am using rsync to upload huge files - the B2 upload used rclone with chunking and 8 parallel streams, so naturally it is faster. I tried doing parallel rsync to rsync.net, but it was not faster - probably because the big 150GB file is still uploaded in one stream, so I would never be able to make that faster.
 

Blinky 42

Active Member
Aug 6, 2015
PA, USA
Also to consider: if you otherwise like rsync.net, have you tried the other endpoints for a speed comparison? (rsync.net Cloud Storage for Offsite Backups)

They mention that they are on Init7's network for the Zurich site, and your traces show the same: from your ISP (gigabit.dk) you go through Hurricane Electric until GlobalConnect in Taastrup (probably through the DIX routers), and then through Init7's network to rsync.net's servers. Somewhere along that path it is slow - probably in Taastrup or farther up the chain, if you don't see the same problems uploading to other sites on the internet.

Your ISP peers with these folks (PeeringDB), and Init7 is here (PeeringDB).
You can try Init7's looking glass to see whether a trace back to your site looks the same as the one out to rsync.net (Init7 - AS13030).
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Also to consider: if you otherwise like rsync.net, have you tried the other endpoints for a speed comparison?
Indeed - I have tried both the US and Hong Kong endpoints, and the experience was similar.

You can try Init7's looking glass to see whether a trace back to your site looks the same as the one out to rsync.net.
Strangely enough, the path from their site to my IP address takes a different route - but they are not using ICMP in their traceroute, so a lot of the responses are just * * *

Code:
traceroute to <masked> (<masked>), 30 hops max, 60 byte packets
 1  r1glb3.core.init7.net (77.109.144.217) [AS13030]  0.383 ms  2.284 ms  2.262 ms
 2  r1glb1.core.init7.net (82.197.163.129) [AS13030]  11.306 ms  11.284 ms  11.332 ms
 3  r1zug1.core.init7.net (77.109.140.206) [AS13030]  7.690 ms  7.704 ms  7.716 ms
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  r1glb3.core.init7.net (5.180.135.68) [AS13030]  2.900 ms  3.062 ms  3.458 ms
 9  v6-as6939-e-1-7.ixn1.chix.ch (185.1.59.137) [AS212100]  2.147 ms  2.215 ms  2.165 ms
10  100ge15-1.core1.fra1.he.net (184.105.65.30) [AS6939]  6.898 ms  6.718 ms  6.789 ms
11  ve950.core2.fra1.he.net (184.104.195.14) [AS6939]  7.669 ms  6.971 ms  6.992 ms
12  192.38.7.38 (192.38.7.38) [AS1835]  22.179 ms  22.104 ms  22.142 ms
13  gigabit-aps.10gigabitethernet1-2.core1.cph1.he.net (216.66.83.102) [AS6939]  22.119 ms  25.666 ms  24.292 ms
14  albertslund-edge1-lo.net.gigabit.dk (185.24.171.254) [AS60876]  24.995 ms  25.271 ms  25.036 ms
And it seems like it takes a "better" route, even though it's longer. Hop #9 is still inside Switzerland, if you can believe the .ch suffix, and from there it's only 3 or 4 hops to Denmark.

But the end-result latency is similar - which just shows the distance.

But my guess is that I will never be able to reach my expected speed with rsync as the sync program, because my files are few and very big.

One thing I could consider is asking my ISP if they support IPv6 (which I assume they do) - that might give me a different and better route to rsync.net's Switzerland site.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
update:

I have, with much work (because I know nothing), finally gotten my pfSense working with IPv6 and now have a delegated IPv6 prefix on my LAN.

My NAS is now running IPv6 as well as IPv4, and I have changed my backup script to prefer IPv6 when rsyncing - hopefully I will get better speed.
My latency to the server is lower now:
Code:
traceroute6 to prio.ch-s011.rsync.net (2001:1620:2019::229) from <masked>, 64 hops max, 20 byte packets
 1  <masked>.gigabit.dk  0.165 ms  0.099 ms  0.090 ms
 2  <masked>.ip6.gigabit.dk  1.643 ms  0.826 ms  0.963 ms
 3  e0-2.core2.cph1.he.net  2.891 ms  3.034 ms  3.136 ms
 4  2001:7f8:1f:0:1:3030:52:0  3.912 ms  1.577 ms  0.861 ms
 5  2001:7f8:1::a501:3030:2  19.349 ms  13.462 ms  12.879 ms
 6  r2fra2.core.init7.net  17.684 ms  17.496 ms  17.404 ms
 7  2a00:5641:117::4  16.692 ms  16.905 ms  16.917 ms
 8  r1fra3.core.init7.net  17.028 ms  17.217 ms  16.824 ms
 9  2a00:5641:12e::  22.565 ms  22.603 ms  22.997 ms
10  2001:1620:2019::229  22.401 ms  21.708 ms  21.966 ms
Not by much - 3 ms or so - but if I am lucky everything will just be better :)
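The script change itself is trivial - the ssh transport can be forced onto v6 (a sketch; the real user and path are elided):
Code:
# force the ssh transport onto IPv6; drop the -6 to fall back to IPv4
rsync -az --progress -e "ssh -6" /backups/ user@prio.ch-s011.rsync.net:backups/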
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
The first run after the change to IPv6 was not awesome:

Code:
sent 3,844,115,241 bytes  received 4,181 bytes  4,541,192.47 bytes/sec
So even worse - but it could have been a bad day.
 

Blinky 42

Active Member
Aug 6, 2015
615
232
43
48
PA, USA
I wouldn't expect the IPv6 and IPv4 behavior to be very different unless the parties involved don't peer with everyone along the way for both v4 and v6 (Google/Cogent/HE have been known offenders on the v6 side in the past). If they only peered for one protocol, your traffic would end up taking different paths - but the traces above suggest that isn't the situation, so whatever bottleneck is in place is likely to impact v4 and v6 traffic in a similar fashion.
Probably time to experiment with test uploads to B2 or the others to see what the performance is like.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
And today
Code:
sent 4,348,307,260 bytes received 5,509 bytes 2,278,392.86 bytes/sec
Even worse: 2.2 MB/s, which is just plain horrible. I know B2 performs much better, but I was hoping that switching to IPv6 - with its slightly lower latency and the differences in how IPv6 is routed compared to IPv4 - would give me just a tiny bit better performance; instead it seems to be much worse. (I am thinking of packet fragmentation, which routers apparently don't do with IPv6.)

I will let it run for a couple of days, just to see if it keeps being bad, and then switch back to IPv4 to see if it returns to its "better" performance. If it does, then I guess IPv6 towards rsync.net is out of the question for sure.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
r00t.dk
Code:
sent 5,059,720,524 bytes received 6,696 bytes 2,627,747.19 bytes/sec
So I think that concludes my experiment with IPv6 towards rsync.net.

Now let's hope that switching back to IPv4 at least gets me my mediocre performance again and that IPv6 hasn't tainted anything :)

Edit: Going back to IPv4 shows similarly abysmal performance - so something has broken (though not on my side, since I can still iperf outside my network at 1Gbps, and also rsync at almost 1Gbps to another host on my LAN).
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
It seems like rsync.net had issues on the server my backup was on - I received a mail a couple of days ago saying they would move my data to another, less loaded server - and lo and behold, suddenly I was getting 16MB/s consistently, two days in a row. And today it was even higher, 23MB/s - so I am guessing that something was oversubscribed at rsync.net. But I am glad it is finally allowing me two-digit MB/s speeds.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
The squeaky wheel gets the grease. "Passive shaming" is the tactful way.

And you've done it well.

If rsync.net wasn't knowingly backsliding all along, they should be rewarding you (e.g. 6mo-1yr service credit) for your efforts/aggravation.
 

Bjorn Smith

Well-Known Member
Sep 3, 2019
876
481
63
49
r00t.dk
The squeaky wheel gets the grease. "Passive shaming" is the tactful way.
Hi,
I'm not sure I understand - English is not my first language, and even though I consider myself fluent, I still don't understand the meaning of this.

Do you mean to say that because I have publicly shown that I got poor performance, they moved me to a better server? If that is the case, then I am disappointed - if that is why they did it.

My post was in no way meant to point fingers at rsync.net. I have gotten support from them every time I asked for it, even with suggestions on how to tune my FreeBSD network settings etc. So they have been really helpful, even though their help changed nothing.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
Hi,
I'm not sure I understand - English is not my first language, and even though I consider myself fluent, I still don't understand the meaning of this.
Your English is superb! Even so, since you are in N. Europe, I would normally have avoided local [U.S.] sayings ["... squeaky wheel ..."] and figurative usage ["passive shaming"], but you started it! ["... donkey balls ..."] :)
Do you mean to say that because I have publicly shown that I got poor performance, they moved me to a better server? If that is the case, then I am disappointed - if that is why they did it.
I think it's a possibility. Consider: a week+ ago, you described their support folk assisting you--with all suggestions "directed outward" (i.e., implying "it's not our [rsync.net] fault"). But, lo and behold, now, the real solution:
they would move my data to another, less loaded server
Is it possible that this exemplifies their methodology for "resource utilization" and "load balancing"? Begrudgingly reducing their profitability in the process. Should I have (above) written, instead: "ONLY the squeaky wheels get the grease"?
My post was in no way meant to point fingers at rsync.net.
And, I interpreted it exactly the way you intended--which is why I used the word "passive" (even emboldening it).
I have gotten support from them every time I asked for it, even with suggestions on how to tune my FreeBSD network settings etc. So they have been really helpful, even though their help changed nothing.
The proverbial "give them an A for effort"--which does often carry the euphemistic, but only implied, addendum "but an F for results".

In summary, I'm glad you've solved the problem. (but I still think you should get a "reward"/service_credit.)