Best Practices for Central Logging in Kubernetes Clusters

Asked By TechWanderer99

I'm currently setting up a central Kubernetes cluster to run kube-prometheus-stack and Loki for logging. The goal is to let developers spin up their own clusters to work on code, then tear them down when they're done, while still being able to compare metrics and logs against their previous clusters. We're planning to build a sidecar for the central Prometheus that acts as a gateway API the individual clusters connect to. Given this setup, is there a more effective approach to central logging? Just to clarify: using multiple namespaces instead of separate clusters isn't an option for us. Any suggestions? Thanks!
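For context, the plan on each ephemeral cluster looks roughly like the kube-prometheus-stack values below: metrics are pushed to the central gateway with a cluster label so they stay comparable after teardown. The URL, cluster name, and Secret name here are placeholders.

```yaml
# values.yaml for kube-prometheus-stack on an ephemeral dev cluster
prometheus:
  prometheusSpec:
    # stamp every series with the source cluster so it remains
    # identifiable in the central store after the cluster is gone
    externalLabels:
      cluster: dev-alice-0427
    remoteWrite:
      - url: https://prom-gateway.central.example.com/api/v1/write
        # credentials the central gateway can use to admit known
        # clusters; these reference keys in a Secret named
        # remote-write-creds in the same namespace
        basicAuth:
          username:
            name: remote-write-creds
            key: username
          password:
            name: remote-write-creds
            key: password
```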

2 Answers

Answered By DevOpsNinja

We've opted out of centralized logging and instead deploy the observability stack onto each cluster with Argo CD. Our reasoning was to avoid the central pipeline becoming a bottleneck for log data and to let each team manage its own observability. It's working out well for us so far!
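Roughly how we wire it up, in case it helps: an Argo CD ApplicationSet with the cluster generator installs the stack on every cluster registered in Argo, so new dev clusters get observability automatically. The chart version and names here are just examples.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: observability
  namespace: argocd
spec:
  generators:
    # emits one Application per cluster known to Argo CD
    - clusters: {}
  template:
    metadata:
      name: 'kube-prometheus-stack-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://prometheus-community.github.io/helm-charts
        chart: kube-prometheus-stack
        targetRevision: 58.0.0   # pin whatever chart version you've tested
      destination:
        server: '{{server}}'
        namespace: monitoring
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
```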

DataWatcher21 -

That's interesting! We're doing the same thing, but when clusters are destroyed we lose all their metrics and logs, which makes comparing different versions harder.

Answered By LogGuru42

Be careful with this approach! It sounds good on paper, but you can run into trouble the moment one developer accidentally floods the log storage. It's crucial to separate production logs from non-production ones so a problem on one side can't affect the other.
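One concrete way to get that separation in Loki is to turn on multi-tenancy and give each tenant its own ingestion limits, so a dev tenant blowing its budget can't touch prod. The numbers below are illustrative.

```yaml
# loki.yaml: require a tenant (X-Scope-OrgID header) on every request
auth_enabled: true

limits_config:
  # defaults for any tenant without an override
  ingestion_rate_mb: 4
  ingestion_burst_size_mb: 6

runtime_config:
  file: /etc/loki/overrides.yaml

---
# /etc/loki/overrides.yaml: per-tenant overrides, reloaded at runtime
overrides:
  prod:
    ingestion_rate_mb: 16
    ingestion_burst_size_mb: 24
  dev:
    ingestion_rate_mb: 2
    ingestion_burst_size_mb: 4
```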

LogMaster88 -

I've seen similar situations where ingestion gets overwhelmed. You could set up a dedicated ingress for log collection, or add identifiers to the logs so you can track down which service is causing the issue.
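For the identifier part, the log shipper can stamp every stream before it leaves the cluster; with Promtail that's a client-level setting. The URL, tenant, and cluster name below are placeholders.

```yaml
# promtail clients section: push through the dedicated log ingress
clients:
  - url: https://logs-ingest.central.example.com/loki/api/v1/push
    tenant_id: dev               # the Loki tenant this cluster writes to
    external_labels:
      cluster: dev-alice-0427    # traces any noisy stream back to its cluster
```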

StreamlineBob -

Throttling on the log collector could help! Capping each pod's log output keeps a single offender from dragging down overall performance and makes it easier to spot who's being noisy.
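Something like Promtail's limit pipeline stage can do this per pod, dropping whatever exceeds the cap at the edge. The rates are illustrative, and the stage needs a reasonably recent Promtail version.

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # expose the pod name as a label the limit stage can key on
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
    pipeline_stages:
      - limit:
          rate: 100            # lines per second allowed per pod
          burst: 200
          by_label_name: pod   # apply the cap per distinct pod label
          drop: true           # discard the excess instead of back-pressuring
```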
