We have logging and S3 integration enabled to send log files as compressed archives to an S3 bucket. It seems that the log files inside the archives have invalid timestamps from the year 2042. Is this a known flaw, and is there any workaround? This is not an impediment, but it raised our curiosity.
Example:
Hi there. That is quite odd. Is the filename the whole thing, i.e. Jul 16 2042 2024, or is it just the 2024-04-07... part?
The filenames can be configured as described in Changing where log files are written. Do you know if this has been done in this case, or if these are just the default settings?
That “Jul 14 2042” is the file system timestamp; the rest is the file name.
Was any compression used, or was this how the log was in S3?
Gzip compression is enabled in the Fastly logging settings, so the files are compressed by Fastly before being sent to S3. The file name example is from an extracted file.
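One thing worth checking, since gzip is involved: the gzip header carries a 4-byte MTIME field (RFC 1952), and gzip restores that value as the extracted file's modification time by default, so a bogus value written at compression time would surface exactly as a strange file system timestamp. A quick sketch for reading the stored timestamp straight from the header; the file name here is just a placeholder for one of your archives:

```shell
# Bytes 4-7 of a gzip file are the little-endian MTIME field (RFC 1952),
# a Unix timestamp of when the data was compressed (0 means "not set").
f=2024-04-07-example.log.gz   # placeholder: substitute one of your archives

# Read the four bytes individually so the result does not depend on
# the host's endianness, then assemble the little-endian integer.
mtime=0
shift_=0
for b in $(od -An -tu1 -j4 -N4 "$f"); do
    mtime=$(( mtime + (b << shift_) ))
    shift_=$(( shift_ + 8 ))
done

# GNU date; on BSD/macOS use: date -u -r "$mtime"
date -u -d "@$mtime"
```

If this prints a 2042 date for an unextracted archive, the timestamp was already wrong in the gzip header before extraction, which would point at the compression side rather than at S3 or the extraction tooling.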
That is very strange. I have just tested this and got the following for one of my files:
stat 2024-04-09T11\ 31\ 00.000-Z6uFj6N9mBH8oXoHa0ry.log
File: 2024-04-09T11 31 00.000-Z6uFj6N9mBH8oXoHa0ry.log
Size: 567 Blocks: 8 IO Block: 4096 regular file
Device: 253,1 Inode: 11142304 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/user) Gid: ( 1000/user)
Access: 2024-04-09 12:34:12.628387412 +0100
Modify: 2024-04-09 12:34:10.856373299 +0100
Change: 2024-04-09 12:34:10.884373522 +0100
Birth: 2024-04-09 12:34:10.856373299 +0100
This is an extracted gzip file. If you start a support ticket for your specific service, I’d be happy to assist you further.