I'm managing backup jobs between different systems and S3, and to keep costs down we've archived old data to Glacier. Sometimes these jobs try to access older files that are now in Glacier, and the requests fail because Glacier objects must be restored before they can be read, which currently happens manually. Is there a way to automatically trigger a retrieval from Glacier, keeping the file available for about 7 days after a failed access? That would give the job enough time to rerun and read the files it needs.
3 Answers
Have you considered catching those errors on the client side, for example from your job logs? It could be a more straightforward way to track these cases.
CloudTrail is a potential solution, but it can get expensive. A cheaper approach might be to use your job history instead: watch for failed jobs, restore the affected objects in periodic batches, and add monitoring around those failures so you can keep an eye on costs over time.
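The batch approach above can be sketched roughly as follows. This is a minimal sketch, not a drop-in tool: it assumes you can extract a list of `(bucket, key)` pairs from your job history yourself (that part is entirely up to your backup system), and it uses boto3's `restore_object` with a 7-day retention window. The `dedupe_keys` helper and the function names are my own.

```python
def dedupe_keys(failed_keys):
    """Collapse repeated failures so each object is restored only once per run."""
    seen = []
    for pair in failed_keys:
        if pair not in seen:
            seen.append(pair)
    return seen

def restore_batch(failed_keys, days=7):
    """Request a Glacier restore for each failed (bucket, key) pair.

    boto3 is imported lazily so the pure list handling above can be
    exercised without AWS credentials.
    """
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    for bucket, key in dedupe_keys(failed_keys):
        try:
            s3.restore_object(
                Bucket=bucket,
                Key=key,
                RestoreRequest={
                    "Days": days,  # keep the restored copy around for a week
                    "GlacierJobParameters": {"Tier": "Standard"},
                },
            )
        except ClientError as err:
            # A previous run may already have kicked off this restore.
            if err.response["Error"]["Code"] != "RestoreAlreadyInProgress":
                raise
```

You would run this from a scheduled job (cron, EventBridge schedule, etc.) after scanning your job history for Glacier-related failures.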
Currently there's no built-in S3 mechanism that automatically reacts to a failed GetObject. However, you can enable CloudTrail Data Events for S3, which logs object-level API calls (including failed GetObject requests) to an S3 bucket and can emit an event to EventBridge, which could serve as your trigger for restores. Just be cautious: data events can generate a large volume of logs, with matching costs. There's a blog post I found that elaborates on this setup.
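To make the event-driven idea concrete, here is a minimal Lambda-style sketch, assuming CloudTrail Data Events are enabled for the bucket and an EventBridge rule forwards the CloudTrail record for failed GetObject calls to this handler. The event shape, function names, and wiring are assumptions for illustration; S3 reports `InvalidObjectState` when a GetObject hits an archived object, which is the error this filters on.

```python
RESTORE_DAYS = 7  # keep the object retrievable long enough for the job to rerun

def extract_restore_target(detail):
    """Return (bucket, key) if this CloudTrail record is a failed GetObject
    against an archived object, else None."""
    if detail.get("eventName") != "GetObject":
        return None
    # S3 returns InvalidObjectState when the object is archived in Glacier
    if detail.get("errorCode") != "InvalidObjectState":
        return None
    params = detail.get("requestParameters") or {}
    bucket, key = params.get("bucketName"), params.get("key")
    if bucket and key:
        return (bucket, key)
    return None

def handler(event, context=None):
    """Assumed EventBridge entry point; issues a 7-day Standard-tier restore."""
    target = extract_restore_target(event.get("detail", {}))
    if target is None:
        return {"restored": False}
    import boto3  # imported lazily so the parsing logic is testable offline
    s3 = boto3.client("s3")
    bucket, key = target
    s3.restore_object(
        Bucket=bucket,
        Key=key,
        RestoreRequest={
            "Days": RESTORE_DAYS,
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )
    return {"restored": True, "bucket": bucket, "key": key}
```

Standard-tier restores typically take a few hours, so the rerun should be scheduled with that delay in mind; Expedited retrievals are faster but cost more.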
Just a heads-up: the Intelligent-Tiering part isn't quite right. You still need to manually restore those objects even with the instant access tier!

Thanks a bunch! I’ll dig into that more.