Having configured several sites with Fastly (and some of those also with Cloudflare and/or CloudFront), I have a few general pointers. If you have specific questions or problems, I’m happy to help!
Use a separate Fastly Service for staging
Nothing in Fastly stops you from running staging.example.com and production.example.com through the same Service. But there are some features (like TLS versions) that can only be configured at the Service level. For confidence in rolling out changes to your configuration, put staging in a separate Service. This is especially true if you’re using apex domains and/or terraform.
Use a separate apex domain for your staging service
I’ve tried www.example.com and staging.example.com in the past and it creates a lot of problems. You want staging to look as much like production as possible. Apex domains work pretty well on Fastly, but they have some limitations (both in Fastly and on the web in general) that you want to know about early. Use example.com and example.dev and save yourself a lot of headaches. (Some of the things that differ between apex domains and subdomains include DNS anycast, CNAME support, and cookie handling.)
Make sure your content is actually cacheable
If your response includes a set-cookie header, Fastly won’t (and shouldn’t) cache it. If you have a resource that is composed of five other pieces of data, it needs to expire at the rate of the fastest-changing of those. Decompose resources and make more, faster HTTP requests. (Sometimes, your data will tell you that you really do need to group things. For that, Edge-Side Includes and other similar compositional techniques can help.)
Set cache-control headers in your origin
Your origin knows the most about the data, its dependencies, and how often it will change. It is the best place to define cache behavior. You can override TTLs in Fastly, but do this only when all other options have failed.
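As a concrete (and purely illustrative) sketch, here is what that looks like in a small Flask origin; the framework, the route, and the TTL values are my assumptions, not anything specific to any particular setup.

```python
# Minimal sketch of an origin declaring its own cache policy.
# Flask, the route, and the TTLs are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/articles/<article_id>")
def article(article_id):
    resp = jsonify({"id": article_id, "title": "stand-in payload"})
    # Browsers always revalidate; Fastly (and other shared caches) may
    # serve this for up to a minute before going back to the origin.
    resp.headers["Cache-Control"] = "public, max-age=0, s-maxage=60"
    # Fastly also understands a Surrogate-Control header if you want to
    # give the CDN a different TTL than downstream caches.
    return resp

if __name__ == "__main__":
    app.run()
```

The important part is that the policy lives next to the code that generates the data, not in the CDN configuration.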
You’ll get very far with stale-while-revalidate
This is especially true for “popular” resources. Every resource is different, but if I don’t know what cache behavior to pick, I’ll start with
cache-control: public, max-age=0, s-maxage=60, stale-while-revalidate=15552000, stale-if-error=36288000
That will cause browsers to never cache the response, but Fastly to cache it for 1 minute and then continue to serve the stale version for up to an additional 180 days while it attempts to get a fresh version. Each Fastly node queues up all the clients asking for the same resource, so your origin will only get one revalidation request for all the simultaneous clients each time a resource expires.
(If stale-while-revalidate isn’t cutting it for you – for example, if you need to expire resources faster than 60s – you’ll have to switch to long cache times and active purging. Cache tags (Fastly calls them surrogate keys) are the next easiest solution.)
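Active purging usually ends up being a small script or webhook that calls Fastly’s purge API. Here’s a rough sketch of purging by surrogate key; the service ID, token, and key name are placeholders, and it assumes your origin already tags cacheable responses with a Surrogate-Key header (check Fastly’s API docs for the exact endpoint details).

```python
# Rough sketch of purge-by-surrogate-key. The service ID, API token, and
# key name are placeholders; it assumes the origin tags cacheable responses
# with e.g. "Surrogate-Key: article-42".
import requests

FASTLY_SERVICE_ID = "YOUR_SERVICE_ID"  # placeholder
FASTLY_API_TOKEN = "YOUR_API_TOKEN"    # placeholder

def purge_key(key: str) -> None:
    """Ask Fastly to invalidate everything tagged with this surrogate key."""
    resp = requests.post(
        f"https://api.fastly.com/service/{FASTLY_SERVICE_ID}/purge/{key}",
        headers={"Fastly-Key": FASTLY_API_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()

# e.g. after publishing an edit to article 42:
purge_key("article-42")
```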
Use cache tiering
If your users are all geographically concentrated, this may not matter much. If they’re mostly concentrated, but with a long global tail, cache tiering (Fastly calls it shielding) will dramatically improve performance for the folks outside of the core area.
Beware cache fragmentation
If you have a resource that varies by language, user-agent, or some other input that has huge variation, HTTP caching won’t buy you much out-of-the-box.
For example, say my browser sends accept-language: en-US,en;q=0.5,fr;q=0.4 and yours sends accept-language: en-CA,en;q=0.6,fr;q=0.1. If the server supports en but not any country-specific variant, those will both resolve to en. But (assuming the response has a proper vary: accept-language header) they’re cached separately.
To improve performance and reduce load, you can have Fastly normalize that accept-language header to a resolved-language: en header and vary on that. Check out Financial-Times/polyfill-service for a sophisticated example of this.