Hey @PaulRudin, good questions…
> When a request arrives at the service is the cache checked before the edge code is run? And if there’s a hit then the edge code doesn’t run?
Not really. For a Compute service, when a request arrives, your compute code runs straight away; there’s no automatic cache check in front of it.
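To make that concrete, here’s a minimal sketch of a Rust Compute service (the backend name `origin_0` is a placeholder for whatever backend your service has configured). This handler runs on every request, with no cache lookup in front of it:

```rust
use fastly::{Error, Request, Response};

// Invoked for every request to the service; nothing is served from
// a cache before this function runs.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // "origin_0" is a placeholder backend name configured on the service.
    Ok(req.send("origin_0")?)
}
```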
> What about fetch requests to an origin server from edge code - is there an automatic cache check here before the request is made of the origin server?
Yes, there’s some automatic caching at this layer. It’s a pass-through cache that caches origin responses unless the origin request is set as a Pass, and only if the HTTP status of the origin fetch is one of 200, 203, 300, 301, 302, 404, or 410. For cached responses, there’s a default TTL of 2 minutes, unless the origin sets one via response headers. There’s a fuller explanation at Cache freshness and TTLs | Fastly Developer Hub.
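You can also steer that pass-through cache per request from your code. A hedged sketch using the Rust SDK’s cache override methods on `Request` (the `origin_0` backend and `/account` path are placeholders):

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(mut req: Request) -> Result<Response, Error> {
    if req.get_path().starts_with("/account") {
        // Mark this fetch as a Pass so it skips the pass-through cache.
        req.set_pass(true);
    } else {
        // Override the default 2-minute TTL for cacheable responses.
        req.set_ttl(300);
    }
    Ok(req.send("origin_0")?)
}
```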
> Or is it necessary to manage a cache explicitly in your edge code if you want to avoid making an origin request? If this is so then alternatively one could chain the edge service with a vcl intermediate service to manage the caching, rather than dealing with it in your code?
This is an option too. Generally I think about it in terms of what you’re trying to achieve. For example, if you’re assembling cacheable pages at the edge (personalization, GraphQL, etc.), then the request flow might look like

`user -> vcl service -> compute service -> origin(s)`
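In that flow, the Compute service can tell the caching VCL service in front of it how long to cache the assembled page, for example via a `Surrogate-Control` header (which Fastly honors and strips before the response reaches the end user). A sketch, where `build_page` is a stand-in for your assembly logic:

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Assemble the page (origin fetches, personalization, etc.).
    let page = build_page(&req)?;

    // Instruct the caching service in front of this one to cache
    // the assembled result for 5 minutes.
    Ok(Response::from_body(page)
        .with_header("Surrogate-Control", "max-age=300")
        .with_content_type(fastly::mime::TEXT_HTML_UTF_8))
}

// Placeholder for whatever page-assembly logic you have.
fn build_page(_req: &Request) -> Result<String, Error> {
    Ok("<html>...</html>".to_string())
}
```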
Conversely, if you’re using Compute@Edge as a router, then you might want a service chain like

`user -> compute -> vcl service(s)`
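As a sketch of that router pattern (the backend names are hypothetical, each pointing at a downstream VCL service):

```rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Pick a downstream service based on the request path.
    let backend = if req.get_path().starts_with("/api/") {
        "vcl_api_service"
    } else {
        "vcl_web_service"
    };
    Ok(req.send(backend)?)
}
```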
For that first use case, building cacheable objects in Compute, we’re now exposing cache controls directly in C@E, so there’s less reason to put Compute behind a VCL service. If you want to check out those interfaces, here is the pertinent section of the Rust SDK.
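For a taste of what that looks like, here’s a rough sketch against the simple cache interface in the Rust SDK; the key, TTL, and `expensive_assembly` helper are all illustrative, and the exact signatures may differ between SDK versions:

```rust
use std::time::Duration;

use fastly::cache::simple::{get_or_set_with, CacheEntry};
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    // Look up the key; on a miss, run the closure to build the object
    // and insert it with a 60-second TTL.
    let value = get_or_set_with("my_key".into(), || -> Result<CacheEntry, Error> {
        Ok(CacheEntry {
            value: expensive_assembly().into(),
            ttl: Duration::from_secs(60),
        })
    })?
    .expect("the closure returned Ok, so there is a value");

    Ok(Response::from_body(value))
}

// Placeholder for whatever expensive work you'd want to cache.
fn expensive_assembly() -> String {
    "assembled object".to_string()
}
```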