I've looked through a lot of resources on sending EKS audit logs to an S3 bucket, but many seem outdated. What's the best current method to achieve this in 2025?
3 Answers
I recently researched this too. EKS control-plane audit logs are delivered to CloudWatch Logs, so one option is a custom forwarder (such as a Lambda function) that copies them from CloudWatch into S3. Keep in mind that a CloudWatch → Lambda → S3 pipeline doesn't avoid CloudWatch ingestion charges, because the logs still have to land in CloudWatch first. I found this GitHub issue helpful: https://github.com/aws/containers-roadmap/issues/1141.
Essentially, you enable audit logging on the cluster so the logs flow into CloudWatch Logs like the other control-plane logs, and then set up a process to move them to S3; a minimal Lambda sketch for that second step is below.
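Here's a rough sketch of what such a Lambda could look like, assuming the EKS audit log group has a CloudWatch Logs subscription filter pointing at the function. The bucket name, the `AUDIT_BUCKET` environment variable, and the key layout are all placeholders for illustration, not anything EKS provides:

```python
# Hypothetical Lambda: triggered by a CloudWatch Logs subscription filter on the
# EKS audit log group; unpacks each batch and writes it to S3 as one object.
import base64
import gzip
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ["AUDIT_BUCKET"]  # placeholder env var, e.g. "my-eks-audit-logs"


def handler(event, context):
    # CloudWatch Logs delivers a base64-encoded, gzip-compressed JSON payload.
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )

    # Skip the control message CloudWatch sends when the filter is first created.
    if payload["messageType"] != "DATA_MESSAGE":
        return

    # Store the batch as newline-delimited JSON, partitioned by date and log stream.
    now = datetime.now(timezone.utc)
    key = (
        f"eks-audit/{now:%Y/%m/%d}/"
        f"{payload['logStream']}-{context.aws_request_id}.json"
    )
    body = "\n".join(json.dumps(e) for e in payload["logEvents"])
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
```

You'd still pay CloudWatch ingestion on the log group either way; this only saves on retention if you shorten the CloudWatch retention period after the copy.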
As far as I know, EKS audit logs go straight to CloudWatch Logs, so you have to pull them out of CloudWatch to land them in S3. I usually do this with a CloudWatch Logs subscription into Kinesis Data Firehose. If you plan to feed the logs into something like Splunk or Azure Sentinel later, you may also need a Firehose transformation Lambda to reformat the records; a sketch of one is below. That's just my take on it!
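If you do end up needing that formatting step, a Firehose data-transformation Lambda might look roughly like this. It assumes the records arriving in Firehose came from a CloudWatch Logs subscription (so each record's data is a gzipped CloudWatch Logs payload); the exact output shape you want for Splunk or Sentinel is up to you:

```python
# Hypothetical Firehose transformation Lambda: unpacks the gzipped CloudWatch Logs
# payload inside each Firehose record and re-emits the raw audit events as
# newline-delimited JSON so downstream tools can parse them individually.
import base64
import gzip
import json


def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))

        if payload.get("messageType") != "DATA_MESSAGE":
            # Control messages carry no log data; drop them from the delivery stream.
            output.append({"recordId": record["recordId"], "result": "Dropped"})
            continue

        # Each audit event's message is itself a JSON document; emit one per line.
        lines = "\n".join(e["message"] for e in payload["logEvents"]) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(lines.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```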
You actually don’t need Lambda if you want something simpler: subscribe the CloudWatch log group directly to a Kinesis Data Firehose delivery stream that writes to S3. Here’s the documentation: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateDestination.html. A minimal boto3 sketch of the subscription step is below.
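A rough sketch of that wiring, assuming the Firehose delivery stream (with an S3 destination) and the IAM role that lets CloudWatch Logs call `firehose:PutRecordBatch` already exist. The cluster name, ARNs, and region below are placeholders:

```python
# Hypothetical one-time setup: subscribe the EKS control-plane log group directly
# to an existing Firehose delivery stream that writes to S3 (no Lambda involved).
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/eks/my-cluster/cluster",  # EKS control-plane log group
    filterName="eks-audit-to-firehose",
    # An empty pattern forwards every event delivered to the group; enable only the
    # "audit" log type on the cluster if you want just the audit entries.
    filterPattern="",
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/eks-audit-to-s3",
    roleArn="arn:aws:iam::111122223333:role/cwlogs-to-firehose",
)
```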