Hello!
I’m evaluating various CDN offerings and trying to do a few simple PoCs to gain some experience.
My organization wants to sample requests coming through our CDN (say 1 in 1000) to evaluate performance and traffic patterns. The specifics of how we process the data don’t matter so much, I suppose, but the goal is to be able to get request and response data from our CDN and send it to another process to be evaluated / stored.
The data I want specifically is:
- The contents of the request AND response headers AND body
- Timing / performance metrics, e.g. how long it took the backend / origin server to respond to the request
I’m new to Fastly but looking at the docs I see (at least) two options and am hoping somebody might be able to provide some advice or guidance based on your experience.
- Use a Compute service and somehow (?) send the data from the Compute service to an external service for processing. This is my biggest question, I think: is this feasible with the Compute service? In the AWS world (which I’m most familiar with) I would be looking to send the data collected at the Compute / edge layer to a queue / SNS topic in the same region. I don’t see a comparable service in Fastly’s offerings, so I imagine the only option would be to send the data to a third party, but that feels slow and perhaps costly in terms of compute cost (my understanding is that Fastly Compute execution is supposed to be in the millisecond range, but perhaps that only applies to blocking request handling?). This is really my biggest question: how can I get data out of the Compute service effectively, quickly, and without performance degradation?
- Maybe Fastly’s logging service could provide this data for me, so I don’t have to collect the data myself in Compute? Looking at the documentation, though, it doesn’t seem like the log format allows the request / response body to be recorded, which I suppose makes sense, but the body is important to me.
Anyway, I’m hoping somebody might have some advice or experience. I have read over some of the tutorials and know, for example, that there is one where a third-party Redis store is used to store data, so it seems like there is some precedent for this type of data exfiltration (if that’s the right word?), but any pro-tips would be very much appreciated!
Thanks for reading!
That’s correct, our logging functionality is intended for metadata about the request and response, not their bodies (and possibly not even headers except those used to carry required information for handling the request/response).
You could use a Compute service to do the sampling, and store the request and response bodies in a KV Store; this would stay ‘within Fastly’ so would provide maximal performance. That service would then need to emit the unique ID (of each request/response that it stored) to some sort of queue for another system outside of Fastly to pick up the contents from the KV Store (and then delete the relevant keys).
It’s unlikely that this would cause the Compute service to consume more than the initial 20ms-per-request allotment, as these are fairly basic operations. Note that the vCPU allotment is for time that the service’s code is actually running, not while it is waiting for a response from a backend or other endpoint.
There would be costs associated with the use of the KV Store this way, and there are limits on them as well (maximum value size, maximum number of keys, and maximum number of operations per second), so you’d need to evaluate those against your planned traffic volume and sampling rate.
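To make that concrete, here’s a rough sketch of what such a sampling handler could look like using the JavaScript/TypeScript Compute SDK. It’s only an illustration of the shape, not a drop-in implementation: the KV Store name (`sampled-traffic`), the backend names (`origin`, `queue`), and the queue URL are placeholders, and the exact SDK calls should be checked against the current js-compute documentation.

```ts
/// <reference types="@fastly/js-compute" />
import { KVStore } from "fastly:kv-store";

// Sample roughly 1 in 1000 requests.
const SAMPLE_RATE = 1 / 1000;

addEventListener("fetch", (event) => event.respondWith(handle(event)));

async function handle(event: FetchEvent): Promise<Response> {
  const req = event.request;

  if (Math.random() >= SAMPLE_RATE) {
    // Not sampled: just proxy to the origin as usual.
    return fetch(req, { backend: "origin" });
  }

  // Sampled: buffer the request body so it can be both forwarded and stored.
  const reqBody = req.method === "GET" || req.method === "HEAD" ? "" : await req.text();

  const started = Date.now();
  const beResp = await fetch(
    new Request(req.url, { method: req.method, headers: req.headers, body: reqBody || undefined }),
    { backend: "origin" }
  );
  const originMs = Date.now() - started;

  // Buffer the response body so it can be stored and still returned to the client.
  const respBody = await beResp.text();

  const key = crypto.randomUUID();
  const record = JSON.stringify({
    key,
    url: req.url,
    method: req.method,
    requestHeaders: Object.fromEntries(req.headers.entries()),
    requestBody: reqBody,
    status: beResp.status,
    responseHeaders: Object.fromEntries(beResp.headers.entries()),
    responseBody: respBody,
    originMs,
  });

  // "sampled-traffic" is a placeholder KV Store linked to this service.
  const store = new KVStore("sampled-traffic");
  await store.put(key, record);

  // Tell an external queue (placeholder URL / backend) that a new key is ready,
  // without blocking the client response.
  event.waitUntil(
    fetch("https://queue.example.com/enqueue", {
      backend: "queue",
      method: "POST",
      body: JSON.stringify({ key }),
    })
  );

  return new Response(respBody, { status: beResp.status, headers: beResp.headers });
}
```

Buffering both bodies in memory is what makes the sampled path possible, so the KV Store value-size limit mentioned above is the main thing to keep an eye on.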
Thank you, this is great!
I had considered using the KV Store (more via process of elimination than anything deeper, if I’m honest!), but looking into it further I think this would be a good option, especially since at this point I’m focusing on a PoC and the limitations I’m seeing shouldn’t be an issue.
This bit:
That service would then need to emit the unique ID (of each request/response that it stored) to some sort of queue for another system outside of Fastly to pick up the contents from the KV Store (and then delete the relevant keys).
Things get a bit trickier for me though.
Are there any specific external queue providers that would work better with Fastly than others? Transparently, because I’m most familiar with AWS, my default would be to try something with SQS/SNS, but I don’t know whether there might be better options out there. (Also, I suppose I could speed things up a bit by trying to keep things in the same geographical region.)
Then there is the issue of deleting entries from the KV Store once they’ve been processed / picked up by the async job / queue / whatever. I didn’t see any sort of external (outside of Fastly) API for accessing the KV Store (apologies if I missed something obvious; it’s entirely possible, as most of these services are brand new to me), so I’m guessing that to delete the KV Store entries I’d need to actually expose something via a domain / Fastly Compute, protect it with auth, and then use that as the mechanism to delete the KV entries? e.g. DELETE https://mysecretfastlydomain.com/delete?kv_item_id=abcdef1337..., with whatever auth mechanism protecting the URL?
Does this all sound about right?
Thanks again for your time, I really appreciate it!
We do not make recommendations for third-party products, so you’ll have to make your own decision there. Using Compute means you won’t have shielding available, as it’s not yet part of that platform, so in order to keep the activity in one geo region you’d have to use a Deliver service with shielding which then sends the request onward to a Compute service (which would be handled in the same POP as the Deliver shield). It’s not ideal, but it’s doable.
As far as manipulating the KV Store contents goes, there are Fastly APIs for those things; here’s an example: KV store item | Fastly Documentation
These can be done directly using any HTTP client you wish, or you can use one of our API SDKs which are available for seven programming languages.
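For example, the external system that picks keys off your queue could do something like the following with plain HTTP calls (shown here as a TypeScript/Node sketch; the store ID, token environment variables, and processing step are placeholders, and the item path should be double-checked against the KV store item API page above):

```ts
// Sketch of the external consumer: given a key received from the queue, read the
// stored sample from the KV Store via the Fastly API, process it, then delete it.
// Assumes Node 18+ (global fetch) and that FASTLY_API_TOKEN and KV_STORE_ID are set.
const API = "https://api.fastly.com";
const token = process.env.FASTLY_API_TOKEN ?? "";
const storeId = process.env.KV_STORE_ID ?? "";

async function consume(key: string): Promise<void> {
  const itemUrl = `${API}/resources/stores/kv/${storeId}/keys/${encodeURIComponent(key)}`;
  const headers = { "Fastly-Key": token };

  // Fetch the stored request/response sample.
  const res = await fetch(itemUrl, { headers });
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
  const sample = await res.json();

  // ...evaluate / store the sample somewhere durable here...
  console.log("processed sample", sample.key);

  // Delete the key so the KV Store doesn't grow unbounded.
  const del = await fetch(itemUrl, { method: "DELETE", headers });
  if (!del.ok && del.status !== 404) {
    throw new Error(`delete failed: ${del.status}`);
  }
}
```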
Hey Kevin, thank you so much for taking the time to respond to my question and I’m very sorry it’s taken me half a month to say thanks!
I was able to put together a simple PoC using what we discussed, including sending messages to an external queue to let it know there is new data to be picked up from the KV Store using the API you mentioned. Overall it was super easy to get this all working!
I do have some doubts re: the general architecture of what I’m trying to do; it feels a bit like I’m trying to reinvent the wheel. I’m wondering if using something like OTel (OpenTelemetry Part 3: Using OpenTelemetry in Compute) might be a wiser long-term option, so I’m going to look into that a bit (I’m generally familiar with OTel and have used it a bit in the past, but not in the context of Fastly!).
Thanks again for your time and thoughts!