I've been testing this particular use case recently, so let me pitch in.
It's probably best to read about the Amazon S3 Data Consistency Model:
Amazon S3 offers eventual consistency for overwrite PUTs and DELETEs in all regions: if a process replaces an existing object and immediately attempts to read it, then until the change is fully propagated, Amazon S3 might return the prior data.
So unfortunately there is an undefined window between the object being replaced on S3 and the new version becoming visible to Fastly. If you replace an object and then purge on Fastly, Fastly will occasionally re-fetch and cache the old version of the object after the purge, leaving stale content at the edge until the next purge or TTL expiry.
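To make the race concrete, here is a toy model of the situation (plain Python, no real S3 or Fastly APIs; the class and key names are made up for illustration). An overwrite PUT only becomes visible after a propagation delay, so an edge re-fetch triggered right after a purge can still pick up the prior version:

```python
import time

class EventuallyConsistentStore:
    """Toy model of an object store that is eventually consistent
    for overwrites: a PUT over an existing key takes
    `propagation_delay` seconds to become visible, and until then
    reads return the prior data (as the S3 docs quoted above describe)."""

    def __init__(self, propagation_delay=0.2):
        self.propagation_delay = propagation_delay
        self.visible = {}   # fully propagated versions
        self.pending = {}   # key -> (new_value, visible_at_timestamp)

    def put(self, key, value):
        if key in self.visible:
            # Overwrite: the new version is only visible after the delay.
            self.pending[key] = (value, time.time() + self.propagation_delay)
        else:
            self.visible[key] = value

    def get(self, key):
        if key in self.pending:
            value, visible_at = self.pending[key]
            if time.time() >= visible_at:
                self.visible[key] = value
                del self.pending[key]
        return self.visible.get(key)

store = EventuallyConsistentStore(propagation_delay=0.2)
store.put("logo.png", b"v1")
store.put("logo.png", b"v2")   # replace an existing object

# The Fastly purge itself is fast; the problem is the re-fetch that
# follows it, which can land inside the propagation window:
stale = store.get("logo.png")  # still b"v1" -- the old version is cached again
time.sleep(0.4)
fresh = store.get("logo.png")  # after propagation, b"v2"
```

The stale read is exactly the failure mode: the edge now holds the old object again, and nothing invalidates it until the next purge or TTL expiry.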
Lowering the TTL seems to be the best workaround for now: a stale object then self-heals quickly even when a purge races with S3 propagation.
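One way to lower the TTL only at the edge is the Surrogate-Control header, which Fastly honors and strips before the response reaches browsers. A minimal sketch of the origin-side headers (the function name and the 60-second value are just illustrative choices, not anything from the original question):

```python
def cache_headers(ttl_seconds=60):
    """Build response headers that cap the CDN TTL without forcing
    browsers to cache for the same length of time."""
    return {
        # Short TTL at the Fastly edge only:
        "Surrogate-Control": f"max-age={ttl_seconds}",
        # Browsers revalidate on every use:
        "Cache-Control": "max-age=0, must-revalidate",
    }

headers = cache_headers(ttl_seconds=60)
```

Keeping the edge TTL separate from the browser policy matters here: the bug is at the edge, so only the edge needs the short lifetime.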
Changing object stores would be a larger undertaking, but GCS does not have this limitation. The Google Cloud Storage consistency documentation says:
When you upload an object to Cloud Storage, and you receive a success response, the object is immediately available for download and metadata operations from any location where Google offers service. This is true whether you create a new object or overwrite an existing object. Because uploads are strongly consistent, you will never receive a 404 Not Found response or stale data for a read-after-write or read-after-metadata-update operation.