Technology and Software Blog.
7th February 2015
If you are Ops savvy and can work around the long TTL of CDN caches, then hosting in a CDN may be a high-performance and cost-effective option. There are gotchas and downsides though, so read on.
What you are reading (on www.daniellarsen.nz) is served 100% from CDN. The DNS records on the domain are configured to send all requests to the CDN (not the Azure Website). As long as I get my client-side optimisation right (minification, bundling and so on), site performance should start to approach the bare metal of the CDN infrastructure. In other words, the performance of this site is tuned by the Azure CDN infrastructure team, and I'm cool with that.
This works because, as described in Hello World! A static website/blog with Azure, GitHub and Grunt, this site's content is generated as static HTML files at deployment time. When I author a post and deploy (from GitHub), the newly generated .html file (now in the Azure Website's storage blob) is immediately available to the CDN cache.
Serving an entire website from CDN is an aggressive (some would say foolish) strategy that only works if your content is truly static. The first time you need any kind of dynamic content served (that is not via AJAX) this fails hard. For my purposes it's fine, but there are still some gotchas.
The default TTL for Azure CDN is 72 hours. If you modify an object in CDN you may have to wait up to three days for that modified object to be served, and possibly longer for browser client caches to expire their copy of the content. That is going to be a long three days if you have an embarrassing typo on your home page. Here are three ideas on how to mitigate the effects of TTL.
The first idea is to reduce the TTL for HTML objects in the CDN. For this site I have set the TTL for pages and posts to eight hours and the TTL for CSS, scripts, fonts and images to three days. I can live with my site's HTML content only being updated three times a day. It is slow moving and visitors currently number in the ones. CDN TTL is a trade-off between freshness and performance/cost. I would not recommend a short TTL on a very busy site as it will result in more origin requests, which incur a performance hit and a cost - you pay for the storage hit and the data transfer for origin requests (as well as the CDN outbound data). Using web.config files you can set different TTLs for different folders, which is handy. Large, truly static assets like images and third-party JS/CSS libraries should always have a long TTL.
```xml
<configuration>
  <!-- ... -->
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="0.08:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```
Snippet from Web.config that sets the CDN TTL to eight hours. You can drop Web.config files in folders to vary TTLs on a folder-by-folder basis.
The second idea is to treat any post or page you write as an immutable document. In other words, never modify or delete a page once you have published it; create a new version instead. New versions are served immediately and can be cached for a long time. Your URL must include the version number, either in its path or as a query (this won't work for index.html), which means multiple versions will be indexed by GoogleBot. The Google index problem can be solved with effective use of the canonical link element or header.
<link rel="canonical" href="http://www.daniellarsen.nz/posts/azure-websites/performance-with-azure-cdn.html" />
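One way to make "new versions are served immediately" automatic is to derive the version from the content itself - a short hash appended as a query string at build time (the kind of thing a Grunt task could do during deployment). A sketch, with entirely hypothetical helper names:

```python
# Sketch: content-hash versioning for immutable URLs. A changed file gets a
# new ?v= value, so the CDN treats it as a brand-new object and serves it
# immediately; unchanged files keep their long-cached URL. Helper names are
# hypothetical, not part of this site's actual build.
import hashlib

def versioned_url(path: str, content: bytes) -> str:
    """Append a short content hash so the URL changes whenever the bytes do."""
    digest = hashlib.md5(content).hexdigest()[:8]
    return f"{path}?v={digest}"

print(versioned_url("/css/site.css", b"body { color: #333; }"))
```

For versioned HTML pages you would still emit the canonical link element, so GoogleBot collapses all the versions onto one URL.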
The third idea is to use a clever combination of static and dynamic content, where the basis of your site is loaded from CDN which bootstraps a dynamic app (an Angular app for example) that calls to an API for all dynamic content. Hat-tip to my friend Reece at work who sowed the seed of this idea in my head.
A good overall strategy would be to combine all three of these approaches, and the busier the site got, the more it would look like a traditional web app. But it would be built "performance-first", if you know what I mean. In cloud compute, CPU is expensive, so any time you are caching you are winning, as long as you are not inconveniencing your users too much.
There are only two entries in the DNS record for daniellarsen.nz:
```
@    A      188.8.131.52            3600
www  CNAME  az714222.vo.msecnd.net  3600
```
The A record is for redirection from the root to www (using a service provided by my registrar). The CNAME for www resolves to the CDN, not the Azure Website. Note that there is a TTL on the domain records as well. In this case the Domain TTL is one hour which gives a one hour turnaround should I change my mind and want to host on the Azure Website itself.
It's pretty hard to measure performance with a single digit daily visitor count. But, Google Developers PageSpeed Insights is a pretty good benchmark.
The first time I run this brutally honest web performance analyser it gives a warning for a server response time of 1100ms (I've seen it complain at 470ms). This is because the site is completely cold. No one has hit it for over eight hours (please stop laughing) so the CDN performs an origin request that wakes up the storage on the (Free) Azure Website.
Wait 30 seconds (to expire PageSpeed's cache) and run it again and the warning goes away. It's fast and Google knows it. There is some work to do on bundling and minification, but otherwise, not a bad result. I am ready for some real traffic now to put it to the ultimate test.
The beauty of hosting from CDN is that (in this case) the Azure Website does not do any hosting, so a Free tier website is fine. The custom domain name is associated with the CDN, not the Azure Website, so there is no Shared tier requirement.
CDN pricing starts at about NZ$0.17* per GB of outbound data and gets cheaper as the terabytes go up and/or the region gets more central. You also pay for outbound data from your Azure Website storage blob to the CDN edges: the first 5 GB are free, then approximately NZ$0.17 per GB on the same sliding scale. Traffic within the same region is free so, because my site is hosted in Sydney, I don't get charged for outbound traffic to the Sydney CDN edge.
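As a back-of-envelope check, the billable pieces described above can be sketched as a tiny calculator. This assumes a flat NZ$0.17/GB (ignoring the sliding scale and regional differences) and that the caller passes only billable origin gigabytes, i.e. same-region traffic such as Sydney-to-Sydney-edge is already excluded:

```python
# Back-of-envelope CDN cost sketch using the figures above: a flat
# NZ$0.17/GB (volume discounts and regional variation ignored), with the
# first 5 GB of origin-to-edge transfer free. Same-region origin traffic
# is free and should not be included in origin_outbound_gb.
RATE_NZD_PER_GB = 0.17
FREE_ORIGIN_GB = 5

def estimated_monthly_cost(cdn_outbound_gb: float, origin_outbound_gb: float) -> float:
    """CDN edge-to-visitor data plus billable origin-to-edge data, in NZ$."""
    cdn_cost = cdn_outbound_gb * RATE_NZD_PER_GB
    origin_cost = max(0.0, origin_outbound_gb - FREE_ORIGIN_GB) * RATE_NZD_PER_GB
    return round(cdn_cost + origin_cost, 2)

# 10 GB out of the CDN plus 10 GB of origin transfer (5 GB of it free):
print(estimated_monthly_cost(10, 10))
```

Even at ten times this site's traffic the total stays well under a dollar a month, which is rather the point of the experiment.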
* Brazilian data starts at NZ$0.22 per GB.
I know what you are thinking: "Yeah right, but what does it really cost?" For the answer to that come back soon as I work on the numbers and make observations from my experiment that you are reading right now.