I've been in my new DevOps role for a few months now and have learned a lot about our infrastructure. The organization primarily uses Azure, especially Azure Functions and Kubernetes. In my investigation, I found that many of our Kubernetes resources were deployed with Helm charts, but the deployments weren't well maintained and there's no established source of truth for them. The team has often made changes directly in the cluster, which complicates the situation, especially as we aim to tighten our security processes ahead of an upcoming audit.
Currently, I'm working on implementing Prometheus metric collection for our resources, which can be tricky since I don't want to overwrite any manual changes made directly to the deployed resources. I've been creating minimal values.yaml files and doing comparisons of rendered manifest attributes to ensure they match what's deployed.
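The comparison workflow described above can be sketched in shell. All release, chart, and file names below are hypothetical placeholders; the `helm`/`kubectl` steps need cluster access, so they are shown as comments, and a pair of toy files stands in for the actual comparison:

```shell
# Drift-check sketch -- names are hypothetical placeholders.
#
# 1. Render what the chart *would* deploy from the minimal values file:
#      helm template prometheus prometheus-community/kube-prometheus-stack \
#        -n monitoring -f minimal-values.yaml > rendered.yaml
# 2. Compare against the live objects (`kubectl diff` exits non-zero on drift,
#    which makes it usable as a CI gate):
#      kubectl diff -f rendered.yaml

# Toy stand-in for step 2's comparison, using local files:
printf 'replicas: 2\n' > rendered.yaml
printf 'replicas: 3\n' > live.yaml
if diff -u rendered.yaml live.yaml; then
  echo "in sync"
else
  echo "drift detected"
fi
```

The useful property is the exit code: `diff` (like `kubectl diff`) fails when the files disagree, so the same check can gate a pipeline.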
However, I'm questioning whether my efforts are worthwhile. I'm not deeply integrated into the Kubernetes community yet, and I'm unsure if there are existing tools or processes to streamline this reconciliation. I believe establishing a Git source of truth for our Kubernetes deployments could greatly simplify management and boost confidence in our deployments, but I'm looking for advice on whether I'm on the right track or if there are better solutions out there.
5 Answers
Using `helm get values <release>` is a smart move. You can pull the values that were actually supplied for the release and store them in a version-controlled values.yaml file. In my experience, integrating this with Terraform can provide consistency across your clusters: routing Helm releases through Terraform's `helm_release` resource makes it easier to manage dependencies and configuration, though some teams prefer sticking to Kubernetes-native tools. It's about finding what fits best with your workflow.
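A concrete sketch of that snapshot-and-commit loop, with hypothetical release and repo names; the `helm get values` call needs cluster access, so it appears as a comment with a local stand-in file:

```shell
# Snapshot live Helm values into version control -- names are hypothetical.
mkdir -p snapshot-repo && cd snapshot-repo
git init -q .

# Real step (user-supplied values only; add --all to include chart defaults):
#   helm get values prometheus -n monitoring -o yaml > prometheus-values.yaml
# Local stand-in so the rest of the sketch runs without a cluster:
printf 'serviceMonitor:\n  enabled: true\n' > prometheus-values.yaml

git add prometheus-values.yaml
git -c user.name=ops -c user.email=ops@example.com \
    commit -q -m "Snapshot live Helm values for prometheus release"
git log --oneline
```

Note the default behavior: `helm get values` returns only what was passed at install/upgrade time, which is exactly what you want for a minimal values file; `--all` merges in the chart's computed defaults.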
No, you're definitely not wasting your time! Reconciling Helm charts with the actually deployed resources is crucial, since direct changes in the cluster lead to drift. I recommend running `helm template` to render your manifests and then committing the output to a GitOps repo. Kustomize can also help with managing overlays or patches for additional manifests, keeping everything tidy.
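For the Kustomize side, a minimal `kustomization.yaml` sketch (all paths and names are hypothetical) that layers a local patch on top of chart output rendered with `helm template --output-dir`:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Rendered chart output (e.g. from `helm template ... --output-dir rendered/`);
# the file paths below are placeholders for illustration.
resources:
  - rendered/kube-prometheus-stack/templates/prometheus.yaml
  - rendered/kube-prometheus-stack/templates/servicemonitor.yaml

# Overlay a local change without editing the rendered files:
patches:
  - path: patches/prometheus-retention.yaml
    target:
      kind: Prometheus
      name: kube-prometheus-stack
```

Keeping the rendered manifests untouched and expressing every local change as a patch means `git diff` tells you exactly how your cluster intent differs from the upstream chart.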
It sounds like you're on the right path by trying to establish a process! If you're open to more tooling, consider Argo CD for managing your Helm deployments. It tracks the state of your applications and helps maintain a source of truth by pulling directly from a Git repository, continuously syncing your deployed resources with the desired configuration.
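A hedged sketch of what that looks like as an Argo CD `Application` manifest; the repo URL, paths, and names are placeholders for your own setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kube-prometheus-stack        # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-deployments.git  # placeholder
    targetRevision: main
    path: charts/kube-prometheus-stack
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: false     # start conservative: don't delete out-of-tree resources
      selfHeal: false  # don't revert manual edits until you've reconciled them
```

Given your history of direct cluster edits, starting with `prune` and `selfHeal` disabled lets Argo CD report drift in its UI without destructively "fixing" it before you've reviewed each divergence.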
I totally get where you're coming from, but just be cautious. When team members directly edit Helm-deployed resources, they can create a mess that tools can't automatically resolve. You've built a good auditing script, but if I were you, I'd consider redeploying the core components from scratch to ensure everything is clean and trackable. It might seem like more work, but starting fresh could save you headaches down the line.
I feel your pain with the direct edits! Managing that kind of divergence is a constant struggle. Minimal values.yaml files are absolutely the way to go for clarity: they capture only the configuration you actually changed, without the noise of chart defaults, which keeps each deployment's intent obvious. As for your concern about overwriting changes, a solid validation step like the diff process you're already building is essential to avoid breaking anything during deployments.
