How Can I Properly Send Logs to S3 from EKS Pods?

Asked By CuriousCoder97 On

I'm running workloads on EKS with an S3 bucket mounted inside my pods via s3fs or the S3 CSI driver. This works fine for configuration files, but it breaks down for application logs: the application can write a log file, but S3 objects can't be modified or appended to once created, so the logs never get updated. I'd like to use S3 for logs because it's cost-effective, but this limitation is a significant hurdle. What are some effective ways around it? Is there a better strategy for pushing container logs to S3 from my EKS pods?

5 Answers

Answered By SysAdminNinja On

Alternatively, you could use Fluent Bit. It has a straightforward Helm installation, and you can create a values file tailored to your environment!
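As a minimal sketch, a values file for the official `fluent/fluent-bit` Helm chart can configure Fluent Bit's S3 output plugin directly. The bucket name, region, and key format below are placeholders, and the pods' node or service-account role is assumed to have `s3:PutObject` permission on the bucket:

```yaml
# values.yaml for the fluent/fluent-bit Helm chart (illustrative only;
# bucket name and region are hypothetical placeholders)
config:
  outputs: |
    [OUTPUT]
        Name            s3
        Match           kube.*
        bucket          my-eks-logs
        region          us-east-1
        total_file_size 50M
        upload_timeout  10m
        s3_key_format   /eks-logs/$TAG/%Y/%m/%d
```

Then install with something like `helm repo add fluent https://fluent.github.io/helm-charts` followed by `helm upgrade --install fluent-bit fluent/fluent-bit -f values.yaml`. The S3 output buffers log records locally and uploads them as complete objects, which sidesteps the no-append limitation entirely.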

Answered By TechSavvy234 On

I wouldn’t recommend using s3fs at all. Remember, S3 is designed as an object store, not a traditional filesystem. Instead, consider setting up a logging sidecar that works with a proper log shipping framework. This could save you from a lot of headaches later on.
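One common shape for such a sidecar is an `emptyDir` volume shared between the app container and a log-shipper container: the app appends to files on local disk (where appends actually work), and the shipper tails those files and batches them up to S3. All names, images, and paths below are illustrative:

```yaml
# Sketch of a logging-sidecar pod; image names and paths are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}              # local shared scratch space, not an S3 mount
  containers:
    - name: app
      image: my-app:latest      # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # app appends to local log files here
    - name: log-shipper
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true        # sidecar tails the files and ships them to S3
```

The key design point is that appends only ever happen against the local filesystem; S3 only ever sees whole uploaded objects.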

Answered By CloudGuru42 On

You're right to avoid an object store for log writing. S3 doesn't support filesystem-style access: objects can only be created, read, and deleted, with no in-place modification or append. Updating a "file" therefore means re-uploading the entire object, which is inefficient for logs.

Answered By DataDrivenDev On

Consider using Loki along with Grafana Alloy for storing and querying logs directly from S3. It streamlines the whole process.
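For the Loki side, a fragment of its configuration can point chunk and index storage at S3. This is a hedged sketch, not a complete Loki config; the bucket name and region are placeholders, and the schema dates and periods are just example values:

```yaml
# Partial Loki configuration using S3 as the object store
# (bucket name and region are hypothetical)
storage_config:
  aws:
    region: us-east-1
    bucketnames: my-loki-chunks
schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: aws
      schema: v13
      index:
        prefix: index_
        period: 24h
```

With this setup you still get S3's cheap storage, but Loki handles batching, compression, and indexing, so you can actually query the logs instead of just archiving them.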

Answered By LogMaster3000 On

Definitely go for a log shipping tool rather than using a mounted S3 directly for logs. You'll run into many issues if you don’t.
