Mixing beresp.ttl and Expires header


I’m trying to implement a constant cache time that isn’t affected by having a shield configured.
Normally, if I set a 1h TTL, the real TTL can be up to 2 hours, if an edge fetches from the shield right at the end of the shield’s TTL.

So I’m trying to use the Expires header to make the cache time truly constant, but there’s a lack of documentation on how exactly the Expires header works.
How are beresp.ttl and the beresp.http.Expires header related, and how does one affect the other?
If I set “beresp.http.Expires” in vcl_fetch, will it work, or must it be set on the origin?
What happens if both are set? Which takes priority?

I created the following snippet, which seems to do what I’m trying to achieve:

  if (!beresp.http.Expires) {
    declare local var.expire_time TIME;
    declare local var.my_ttl RTIME;
    set var.my_ttl = 1h;
    set var.expire_time = now;
    set var.expire_time += var.my_ttl;
    set beresp.http.Expires = var.expire_time;
    set beresp.ttl = var.my_ttl;
    unset beresp.http.Cache-Control;
  }

But I’m not 100% sure, because when I log ‘beresp.ttl’ at the end of vcl_fetch for debugging, on the edge it shows 120 (nothing in my VCL sets it to 2m).
Also, here I set both beresp.http.Expires and beresp.ttl on the shield, because I’m not sure that Expires alone will work.

Will be happy to get more detailed information about all this.


Hi Ilya,

This actually should not be necessary. If you are using Expires as your cache-freshness indicator, then that is an absolute moment in time, and is unaffected by how long the object has already been in cache upstream.
For example, if you send this with your object from origin:

Expires: Fri, 26 Oct 2018 12:00:00 GMT

… Fastly will set a TTL such that the object expires at that time. Then if, sometime later, the object is fetched again from an edge location, the shield will serve it from cache, and the edge will set a shorter TTL, again ensuring that it expires at the specified time.
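If you want to see what TTL a given node actually derived, a minimal sketch (the Debug-TTL header name here is invented purely for inspection) is to copy it into a response header in vcl_fetch:

  sub vcl_fetch {
    # Copy the TTL this node computed into a header so it can be
    # inspected from the client side. The header name is arbitrary.
    set beresp.http.Debug-TTL = beresp.ttl;
  }

Comparing that header on the shield and on the edge shows both nodes converging on the same absolute expiry moment.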

However, we actually recommend that you prefer the Cache-Control header for specifying freshness. It is more modern, more powerful, and more flexible. You would set the following header at your origin:

Cache-Control: max-age=3600

And the shield would store the object for an hour. Now, imagine 20 minutes later, an edge needs to fetch the object. The shield will send the cached version, retaining the Cache-Control header that you set, but it will add an Age header as well:

Cache-Control: max-age=3600
Age: 1200

The edge will set a TTL of 2400 seconds, by taking the desired TTL expressed in the Cache-Control header and deducting the Age, i.e. the amount of time that the object has already spent in cache.
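Fastly performs this deduction for you automatically, but purely as an illustration of the arithmetic (using std.atoi and subfield; the Debug-Remaining header name is invented), it amounts to:

  sub vcl_fetch {
    declare local var.max_age INTEGER;
    declare local var.age INTEGER;
    declare local var.remaining INTEGER;
    # Pull max-age out of Cache-Control, and parse Age (0 if absent).
    set var.max_age = std.atoi(subfield(beresp.http.Cache-Control, "max-age", ","));
    set var.age = std.atoi(beresp.http.Age);
    # Remaining freshness = max-age minus time already spent upstream,
    # e.g. 3600 - 1200 = 2400 seconds.
    set var.remaining = var.max_age - var.age;
    set beresp.http.Debug-Remaining = var.remaining;
  }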

Finally, a few points in relation to your VCL code:

  • In the vcl_fetch subroutine, adjusting headers such as beresp.http.Expires or beresp.http.Cache-Control does not affect the amount of time that the object is cached within Fastly. It only controls the behaviour of downstream caches (including Fastly edge nodes if the current node is a shield) and the browser client.
  • Adjusting beresp.ttl does change the amount of time that we store the object, but doesn’t have any effect on the headers, so it is the complement to setting the headers.
  • Two minutes is our platform default TTL, so if we can’t work out what TTL you want, you get 2 mins!
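Putting the first two bullets together, a sketch of pinning both sides to one hour when the origin sends no freshness information (assumed to run on the shield):

  sub vcl_fetch {
    if (!beresp.http.Expires && !beresp.http.Cache-Control) {
      set beresp.ttl = 1h;                            # how long this node caches it
      set beresp.http.Cache-Control = "max-age=3600"; # what downstream caches (edge, browser) see
    }
  }

With both set, the edge then gets Cache-Control plus an Age header from the shield and computes its own shorter TTL, so the total cache time stays constant.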

If you haven’t already, you might find it useful to try our Fiddle tool.


Hope this helps!



Wow, that’s a great idea, subtracting the Age from my TTL. I assume I’d need std.atoi for that, but it seems easy.
Also, I’d need to do this math again in the edge’s vcl_deliver in order to set the right Cache-Control…

So I guessed the right approach with Expires too; I just had trouble verifying it, because the beresp.ttl value is unaffected by the Expires from upstream, even though it seems to work correctly.



The Age behaviour is automatic. You should not need to do anything!

