How to set up a global geolocation DNS load balancing Varnish Cache Content Delivery Network with Bind9, a geo-ip database and DigitalOcean. DIY high availability for your website!
I felt it was time to take Sysadmins of the North to the next level and expand with a global DNS load balancing and Varnish Cache (CDN) service. Here is how I set up my geolocation load balancing Varnish Cache HTTP reverse proxy CDN.
It’s all just for fun; the various configs are not advanced and may not be optimized.
By using a geo load balancing DNS set-up, based on Bind9, visitors are directed to the nearest Varnish Cache node. One is based in the US (New York) and one in the EU (Amsterdam, NL). You can easily expand this set-up to other locations around the globe. Besides using IIS Outbound Rules to create a Content Delivery Network, you can also create your own CDN with PHP.
A set-up like this brings content physically closer to your visitors, increasing website loading speed (decreasing loading times).
First you need some DigitalOcean droplets. I chose Debian 7 (Wheezy), one located in NYC2 and one in AMS2. The smallest instance will do fine for low-traffic sites. After your droplets are created, log in as root and change your root password. Add an ordinary user and then disable sshd’s root logins.
Open up your sshd_config file
and change PermitRootLogin from yes to no:
and restart ssh.
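On Debian Wheezy the steps above boil down to something like the following sketch, assuming the stock sshd_config location:

```shell
# Back up sshd_config, then turn off root logins
cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
# Restart the SSH daemon so the change takes effect
service ssh restart
```

Keep your current session open until you have verified you can log in as the new ordinary user.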
Update your packages
```
apt-get update
apt-get upgrade
```
Now it is time for you to install Bind 9 on your Debian Wheezy droplets. For this, follow the excellent chrooted bind9 with geodns under debian wheezy guide by Pawel Kudzia. This will set up GeoDNS too.
In /etc/bind you have to create a directory named zones and set the owner and permissions correctly. Then create two zone files, for example:
```
; Zone file for cdn.example.com
$TTL 14400
cdn.example.com. 86400 IN SOA ns1.example.org. admin.example.org. (
        2014102909 ; Serial Number
        86400      ; refresh
        7200       ; retry
        3600000    ; expire
        86400 )
cdn.example.com. NS ns1.example.org.
cdn.example.com. 300 IN A 18.104.22.168
```
```
; Zone file for cdn-us.example.com (served to US visitors for the same zone)
$TTL 14400
cdn.example.com. 86400 IN SOA ns1.example.org. admin.example.org. (
        2014102909 ; Serial Number
        86400      ; refresh
        7200       ; retry
        3600000    ; expire
        86400 )
cdn.example.com. NS ns1.example.org.
cdn.example.com. 300 IN A 22.214.171.124
```
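These two zone files are tied to visitor location in Bind’s view configuration. As a rough sketch only: the exact ACL syntax depends on your Bind build (BIND 9.10+ has native GeoIP ACLs, while the guide above uses a GeoIP-patched Bind 9), and the file paths assume the chrooted layout from that guide:

```
// /etc/bind/named.conf.local – sketch, assuming native GeoIP ACLs (BIND 9.10+)
view "us" {
    match-clients { geoip country US; };
    zone "cdn.example.com" {
        type master;
        file "/etc/bind/zones/cdn-us.example.com";
    };
};

view "default" {
    match-clients { any; };
    zone "cdn.example.com" {
        type master;
        file "/etc/bind/zones/cdn.example.com";
    };
};
```

US clients get the US droplet’s A record; everyone else falls through to the default view.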
And don’t forget your glue- and NS records, see the Bind 9 Administrator Reference Manual for more information on how to set up your zones.
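This post doesn’t cover the Varnish configuration in depth. As a minimal sketch for the Varnish 3 that ships with Debian Wheezy, where the backend host and port are placeholders you must point at your actual origin web server:

```
# /etc/varnish/default.vcl – minimal sketch for Varnish 3 on Debian Wheezy.
# "origin.example.com" is a placeholder; use your origin server here.
backend default {
    .host = "origin.example.com";
    .port = "80";
}
```

Each droplet runs the same VCL, so both nodes cache and serve the same origin content.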
Once you’re satisfied with the zone files and Varnish Cache configuration, start bind 9 and varnish:
```
service bind9 start
service varnish start
```
Verify the Geo DNS results using whatsmydns.net and www.just-ping.com.
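For a quick check from the command line, query the authoritative nameserver directly (ns1.example.org is the placeholder from the zones above):

```shell
# Ask the nameserver for the CDN hostname; run this from different regions
# (or via the web tools above) and compare the returned A records.
dig +short cdn.example.com @ns1.example.org
```

From a US connection you should see the New York droplet’s IP; from Europe, the Amsterdam one.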
Protip: Never copy/paste anything to put into production without testing.
This post and set up was inspired by the following articles (in no particular order):
Using the guides available online, some ready-to-use knowledge of your own and cheap DigitalOcean droplets, you can easily create your own global Content Delivery Network, or CDN. I created this set-up in about one hour.
A global DNS load balancing set up like this, with a Varnish Cache back-end, makes content physically closer to your visitors and speeds up your website. They will like that 🙂
Maybe this is all a bit too much for you. You can always use IIS Outbound Rewrite Rules or a PHP and .htaccess configuration to create an Origin Pull Content Delivery Network (CDN), offloading content to different hostnames.
My name is Jan. I am not a hacker, coder, developer, programmer or guru. I am merely a system administrator, doing my daily thing at Vevida in the Netherlands. With over 15 years of experience, my specialties include Windows Server, IIS, Linux (CentOS, Debian), security, PHP, WordPress, websites & optimization. Want to support me and donate? Use this link: https://paypal.me/jreilink.