I'm running workloads in an EKS setup where I have an S3 bucket mounted inside my pods using s3fs or an S3 CSI driver. While this works fine for configuration files, I'm facing issues when trying to use the same S3 mount for my application logs. The application can write logs to a file, but S3 doesn't allow modifying or appending to an object once it's created, so my log files never get updated. I want to use S3 for logs because it's cost-effective, but this limitation is a significant hurdle. What are some effective ways to get around this? Is there a better strategy for pushing container logs to S3 from my EKS pods?
5 Answers
Alternatively, you could use Fluent Bit. It installs easily via Helm, and you can create a values file tailored to your environment.
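For context, here's a minimal sketch of the Helm route using Fluent Bit's `s3` output plugin. The bucket name, region, and `Match` pattern below are placeholders to adapt to your cluster, and the Fluent Bit pods will also need S3 write permission (e.g. via IRSA):

```yaml
# values.yaml for the fluent/fluent-bit Helm chart -- install with:
#   helm repo add fluent https://fluent.github.io/helm-charts
#   helm upgrade --install fluent-bit fluent/fluent-bit -f values.yaml
config:
  outputs: |
    [OUTPUT]
        Name              s3
        Match             kube.*
        bucket            my-log-bucket     # placeholder bucket name
        region            us-east-1         # placeholder region
        total_file_size   50M
        upload_timeout    10m
        s3_key_format     /eks-logs/$TAG/%Y/%m/%d/%H_%M_%S
```

Note that Fluent Bit sidesteps the no-append problem entirely: it buffers log records locally and uploads each batch as a new S3 object, instead of trying to rewrite an existing one.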
I wouldn’t recommend using s3fs at all. Remember, S3 is designed as an object store, not a traditional filesystem. Instead, consider setting up a logging sidecar that works with a proper log shipping framework. This could save you from a lot of headaches later on.
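As a rough sketch of the sidecar pattern, assuming hypothetical names throughout: the app writes ordinary files into a shared `emptyDir`, and a log-shipper container tails them and uploads batches to S3.

```yaml
# Hypothetical pod spec: my-app and the Fluent Bit config are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
    - name: my-app
      image: my-app:latest            # placeholder image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app     # app writes plain files here
    - name: log-shipper
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
      # fluent-bit.conf (tail input + s3 output) would be supplied
      # via a ConfigMap mount, omitted here for brevity
  volumes:
    - name: logs
      emptyDir: {}                    # shared scratch space, not an S3 mount
```

The key design point is that the filesystem the app appends to is local, so appends are cheap; only finished batches ever touch S3.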
You're right to avoid using an object store like that for log writing. S3 objects are immutable: you can only create, read, and delete them, with no in-place modification or appending. To "change" a file you'd have to re-upload the entire object, which is inefficient for logs that grow continuously.
Consider using Loki along with Grafana Alloy for storing and querying logs directly from S3. It streamlines the whole process.
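For reference, a minimal fragment of a Loki configuration pointing its object storage at S3. The bucket name and region are placeholders, and a real deployment also needs a matching `schema_config` plus AWS credentials (e.g. via IRSA):

```yaml
# Fragment of a Loki config using S3 as the backing object store.
common:
  storage:
    s3:
      region: us-east-1            # placeholder region
      bucketnames: my-loki-chunks  # placeholder bucket
```

With this setup, Loki handles batching chunks into S3 objects for you, and Alloy (or any compatible agent) just forwards the container logs to Loki.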
Definitely go for a log shipping tool rather than using a mounted S3 directly for logs. You'll run into many issues if you don’t.
