Apparently, S3 has offered read-after-write consistency for new keys for some time now. However, reads after an update to an existing key are still eventually consistent.
I read an article from HashiCorp about how they use a surrogate key to invalidate all S3 data for a given website: https://www.hashicorp.com/blog/serving-static-sites-with-fastly--s3--and-middleman/
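As I understand it, purging everything tagged with one surrogate key is a single Fastly API call. A minimal sketch in Python, assuming the `requests` library; the service ID, API token, and key name are all hypothetical placeholders:

```python
import requests

FASTLY_API = "https://api.fastly.com"
SERVICE_ID = "YOUR_SERVICE_ID"   # hypothetical placeholder
API_TOKEN = "YOUR_FASTLY_TOKEN"  # hypothetical placeholder

def purge_surrogate_key(key: str) -> None:
    """Invalidate every cached object tagged with this surrogate key."""
    resp = requests.post(
        f"{FASTLY_API}/service/{SERVICE_ID}/purge/{key}",
        headers={"Fastly-Key": API_TOKEN},
    )
    resp.raise_for_status()

# The HashiCorp post tags every S3 response with one site-wide key,
# so a single purge call invalidates the whole static site:
purge_surrogate_key("static-site")
```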
But that doesn’t seem to deal with the issue of updated content. This flow still seems possible (I sketch a possible workaround after the list):
- New static site is generated (some new pages, but mostly updated pages)
- Fastly cache is invalidated for the whole S3 static website
- Page is requested via Fastly before the updated key is consistent on S3
- Fastly caches the stale S3 page and keeps serving it until the next purge
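The only mitigation I can think of is to delay the purge until a read against S3 actually returns the new object. A rough sketch, assuming boto3, the `purge_surrogate_key` helper above, and hypothetical bucket/key names; note that one successful read only narrows the race, it doesn't prove every replica has converged:

```python
import time
import boto3

s3 = boto3.client("s3")

def wait_until_visible(bucket: str, key: str, expected_etag: str,
                       timeout: float = 60.0) -> bool:
    """Poll until a HEAD returns the freshly written version (same ETag).

    Under eventual consistency this only shows that one read saw the new
    object; it is not a guarantee that every replica has converged.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        head = s3.head_object(Bucket=bucket, Key=key)
        if head["ETag"] == expected_etag:
            return True
        time.sleep(1)
    return False

# Hypothetical deploy step: upload, wait for visibility, then purge Fastly.
with open("build/index.html", "rb") as body:
    put = s3.put_object(Bucket="my-static-site", Key="index.html", Body=body)
if wait_until_visible("my-static-site", "index.html", put["ETag"]):
    purge_surrogate_key("static-site")  # from the sketch above
```

Even with that, a request that reaches S3 through Fastly during the polling window could still cache the stale copy, which is exactly the failure mode above.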
I know Google’s equivalent, GCS, doesn’t have this issue, but I wonder how best to mitigate it with S3?