Bandwidth Folly

I’m about to move my domains to a new server, and I couldn’t help noticing that 3 of them were shifting a large amount of data that I couldn’t explain.

Data has never been an issue before – the tariff I was on was unlimited – but the new cloud solution has a limit of 50Gb per month, with (very reasonable) charges if you go over that.

50Gb for a half-dozen personal websites – should be plenty eh?

But no. My stats were showing 60Gb as a usual usage, with peaks going up to 90Gb.

Now, I would be delighted if that many people were visiting my sites, but I know that not to be the case. So I spent this lunchtime looking at the logs.

Boy, am I embarrassed! 😀

A while back – 2017, I think – I started using Hetrix. I came across them as a solution to email server blacklisting – they monitored the blacklists for your domains and warned you if you appeared on any of them. But they also offered an uptime monitor that would check your websites and shout if they went down.

I remember setting these up very conservatively – I thought I had it checking once an hour. I wanted to know about downtime, but it was not critical.

Sometime between then and now, the 1-hour option disappeared. Instead, it was set to the default of 1 minute – for each of 3 websites.

But that wasn’t even one poll per website per minute. You had to specify 3 servers that would poll you – New York, London, Berlin. So that’s 3 servers polling each of 3 websites every minute.

What were they polling? I just pointed them at the website, so it would hit the front page. According to my logs, each hit was about 140K.

So that is 9 hits a minute, each about 140K, times 60 minutes, times 24 hours, times 30 days. That’s 55Gb/month, without any other traffic.
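
If I want to sanity-check that, the same back-of-the-envelope sum only takes a few lines of Python (a sketch using the round figures above, not the exact numbers from my logs):

```python
# Old setup, using the round figures from the post:
# 3 monitoring locations, 3 websites, a ~140 KB front page, polled every minute.
locations = 3
websites = 3
page_kb = 140
polls_per_hour = 60

monthly_kb = locations * websites * page_kb * polls_per_hour * 24 * 30
print(f"{monthly_kb / 1_000_000:.1f} GB/month")  # ~54.4 GB – roughly the 55Gb above
```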

That has now changed. Unfortunately, I now have to specify 4 servers to poll me (another change that only appeared when I went in to make adjustments), but it is polling just one of my websites, on the basis that if that site is down, the whole server probably is. It is polling every 10 minutes, which is the longest interval Hetrix now allows. And it is no longer hitting the front page, but a page containing just the word hetrix. With the overhead of headers, each poll still comes out to about 200 bytes, but that’s better than 140K.
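
If you want to see where a figure like that 200 bytes comes from, a few lines of Python will report the header-plus-body size of a fetched page. The URL below is just a placeholder, not my actual status page:

```python
# Rough check of the per-poll payload (response headers + body) of a page.
# NOTE: the URL is a hypothetical placeholder – substitute your own status page.
import urllib.request

with urllib.request.urlopen("https://example.com/hetrix.txt") as resp:
    body = resp.read()
    header_bytes = len(str(resp.headers).encode("utf-8"))
    print(f"headers: {header_bytes} B, body: {len(body)} B, "
          f"total: {header_bytes + len(body)} B")
```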

So the calculation now is 4 servers, times 1 website, times 200 bytes, times 6 polls an hour (one every 10 minutes), times 24 hours, times 30 days. That comes to under 6Mb/month, a big improvement on 60Gb.
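
And the new sum, in the same back-of-the-envelope style:

```python
# New setup: 4 monitoring locations, 1 website, a ~200 byte status page,
# polled every 10 minutes.
locations = 4
websites = 1
page_bytes = 200
polls_per_hour = 60 // 10

monthly_bytes = locations * websites * page_bytes * polls_per_hour * 24 * 30
print(f"{monthly_bytes / 1_000_000:.2f} MB/month")  # ~3.46 MB – comfortably under 6Mb
```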

One Comment

  1. Chris
    July 5, 2020

    So having moved the sites to a new cloud server, and having taken care of the Hetrix thing, we seem to be ticking along at about 1Gb per day. At that rate, we will do 30-35Gb a month, well within my 50Gb a month free allowance.

    However, doing the maths, even if I peaked one month at 150Gb, something the combined sites have never done, that would simply cost me an extra £2 + VAT.

    So not much to worry about for normal web traffic, but it is also worth putting in some soft limits (i.e. reporting but not stopping) at a reasonable level. Because people can be arseholes, and it would be simple for someone to set up something to hammer a website and drive up traffic deliberately.
