Middleware with VCL?


#1

Hi,

I’m a Fastly customer and I want to figure out if I can push Fastly further to meet my needs.

Scenario:

  1. A user requests the website.
  2. The browser starts making all the requests (HTML, CSS, JS, JPEG, etc.).
  3. I want to create a middleware in VCL that will:
    3.1) if the page is HTML, send that page to a specific service/API
    3.2) when the HTML comes back, send that page back to the user and CACHE IT in Fastly
    3.3) if another user requests that page, serve the CACHED version

Basically, a middleware able to process some files (HTML, JS, CSS) before sending them back to the user.
Is this possible with VCL?

Thank You,
D.


#2

Hi @darez81,

It seems like you are describing the default behaviour of Fastly. If you are already requesting your website through our network, then all you need do is ensure that the assets you want to cache have appropriate Cache-Control headers to tell us whether we can cache them, and for how long.

More info: https://docs.fastly.com/guides/tutorials/cache-control-tutorial
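
For example, if your origin sent a response header like the one below, we would cache that object for an hour with no custom VCL needed (the max-age value here is just an illustration, not a recommendation):

    Cache-Control: public, max-age=3600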


#3

I don’t have control over the origin, so I want to create that logic in Fastly.

  1. If it is .js => cache it
  2. If it is .html => cache it
  3. If it is not .js or .html => don’t cache it

I know how Fastly works; the problem I see is how to create the logic in Fastly to cache things depending on some condition, because I don’t have control over the origin.


#4

Oh, I see, now I understand what you are trying to achieve. Yes, you can indeed modify the TTLs of cache objects in Fastly using edge logic, overriding the Cache-Control directive from your origin server. To do this you just need to set beresp.ttl to a time of your choice.

Rather than making this decision based on a file extension in the URL, it’s better to do this based on the Content-Type of the response. I made a demo to help you try this out:

https://fiddle.fastlydemo.net/fiddle/78c50399
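
In case that fiddle link ever stops working, the core of the approach looks roughly like this. It is only a sketch of the idea, not the exact fiddle contents; the Content-Type checks and the TTL values are example choices you would tune yourself:

    sub vcl_fetch {
      # Override the TTL at the edge, regardless of what the origin says.
      # 1h and 5m are placeholder TTLs -- pick values that suit you.
      if (beresp.http.Content-Type ~ "text/html") {
        set beresp.ttl = 1h;
        return(deliver);
      }
      if (beresp.http.Content-Type ~ "javascript") {
        set beresp.ttl = 5m;
        return(deliver);
      }
      # Anything else: don't cache it.
      return(pass);
    }

If you use full custom VCL rather than a fiddle or snippet, this logic would go inside your existing vcl_fetch alongside Fastly’s standard boilerplate.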


#5

Thank you, this is exactly what I was looking for.
return(pass) means “do not cache it”, right?

d.


#6

In short, yes. However, return(pass) means different things depending on where you use it.

When used in vcl_recv, returning pass will directly move the flow to the fetch stage without performing a lookup. This means that when the backend object is fetched, there is no associated cache address, so regardless of what TTL you give it, it will not be saved in the cache. Next time that URL is requested, the same thing will happen again, assuming you continue to pass in recv.

If you return lookup (the default) from recv, which will also happen if you have no custom recv code, then a cache lookup will be performed, and if there is no hit, we will create a cache entry in advance of the backend fetch. Therefore, by the time we get to fetch there’s already an entry in the cache. If you return pass from fetch in this situation, we’ll mark the new cache entry as a ‘hit for pass’ and save it. Next time the URL is requested, the lookup will hit that cache entry, and because it is marked for pass, we’ll perform a backend request anyway.

This difference may not be worth remembering in your case, but in edge cases it can be relevant.
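
To make the difference concrete, here is a rough sketch of both cases. The /private/ path and the Cache-Control check are made-up examples, just to give each return(pass) a condition to hang off:

    sub vcl_recv {
      # Passing here skips the lookup entirely, so nothing is ever
      # stored for these URLs. ("/private/" is only an illustration.)
      if (req.url ~ "^/private/") {
        return(pass);
      }
      return(lookup);
    }

    sub vcl_fetch {
      # Passing here happens after a lookup, so Fastly stores a
      # "hit-for-pass" marker; later requests for the same URL still
      # go to origin, but without queueing behind each other.
      if (beresp.http.Cache-Control ~ "private") {
        return(pass);
      }
      return(deliver);
    }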