I've looked into various methods for sending EKS audit logs to an S3 bucket, but most of the resources I found are outdated. Can anyone share the best practices or updated methods for doing this in 2025?
4 Answers
From what I've researched, EKS control-plane audit logs are delivered to CloudWatch Logs (once you enable audit logging on the cluster). To get them into S3, you'll typically need to pull the logs out of CloudWatch. I prefer using Kinesis Firehose for this, especially if you need to reformat the logs for tools like Splunk or Azure Sentinel via a Lambda transformation.
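For completeness, a sketch of the first step: turning on audit logging so the events actually land in CloudWatch. The cluster name and region below are placeholders for your own values.

```shell
# Enable control-plane audit logging on an existing EKS cluster
# ("my-cluster" and us-east-1 are placeholders).
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'

# Audit events then appear in the log group
#   /aws/eks/my-cluster/cluster
# under streams named kube-apiserver-audit-*
```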
Have you considered using the Kubernetes logging operator with Fluentd and Fluent Bit on your worker nodes? You can send logs directly to S3 from there. Check out this example for more details: https://kube-logging.dev/docs/examples/
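A minimal sketch of what that setup looks like with the logging operator, assuming its CRDs are already installed. The bucket name, region, and credentials secret are placeholders, and one caveat: the node-level agents only see logs the nodes can read (container/node logs), not the control-plane audit stream itself.

```shell
# Hypothetical Output/Flow pair shipping collected logs to S3.
# "my-log-bucket", "s3-secret", and the region are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
spec:
  s3:
    aws_key_id:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsAccessKeyId
    aws_sec_key:
      valueFrom:
        secretKeyRef:
          name: s3-secret
          key: awsSecretAccessKey
    s3_bucket: my-log-bucket
    s3_region: us-east-1
    path: logs/
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: all-logs
spec:
  match:
    - select: {}
  localOutputRefs:
    - s3-output
EOF
```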
I delved into this topic too. EKS audit logs land in CloudWatch, and to store them in S3, you might have to create a custom solution, like a Lambda function. However, if the high cost of CW log ingestion is a concern, going CW -> Lambda -> S3 won't solve that. You might want to check out this GitHub issue for more insights: https://github.com/aws/containers-roadmap/issues/1141
You can transfer EKS audit logs to an S3 bucket the same way you would move any CloudWatch log group: either a one-off export task for historical data, or a subscription filter for continuous delivery.
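For the one-off case, a sketch using an export task; the log group, bucket, and epoch-millisecond timestamps are placeholders, and the bucket policy must allow the CloudWatch Logs service principal to write to the bucket.

```shell
# Export a time range from the audit log group to S3
# (all names and timestamps below are placeholders).
aws logs create-export-task \
  --log-group-name /aws/eks/my-cluster/cluster \
  --from 1735689600000 \
  --to 1738368000000 \
  --destination my-log-bucket \
  --destination-prefix eks-audit
```

Export tasks run asynchronously; check progress with `aws logs describe-export-tasks`.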
Actually, if you're not looking for too much customization, you can do it without a Lambda function! Just configure the setup from CloudWatch Logs to Firehose and then to S3. Here’s some documentation to get you started: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CreateDestination.html
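A sketch of that Lambda-free pipeline. All names and ARNs are placeholders, and it assumes two IAM roles already exist: one letting Firehose write to the bucket, and one letting CloudWatch Logs put records into the stream.

```shell
# 1. Firehose delivery stream writing straight to S3
#    (role and bucket ARNs are placeholders).
aws firehose create-delivery-stream \
  --delivery-stream-name eks-audit-to-s3 \
  --delivery-stream-type DirectPut \
  --s3-destination-configuration \
    RoleARN=arn:aws:iam::111122223333:role/firehose-s3-role,BucketARN=arn:aws:s3:::my-log-bucket

# 2. Subscription filter feeding the stream from the audit log group;
#    an empty filter pattern forwards every event.
aws logs put-subscription-filter \
  --log-group-name /aws/eks/my-cluster/cluster \
  --filter-name eks-audit-to-firehose \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:us-east-1:111122223333:deliverystream/eks-audit-to-s3 \
  --role-arn arn:aws:iam::111122223333:role/cwl-to-firehose-role
```

Note that CloudWatch Logs delivers subscription data gzip-compressed, so the objects landing in S3 will need decompression downstream.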