I recently watched a video by Anton Putra on setting up ArgoCD for production, which was really insightful! He suggests letting developers deploy their apps freely to the development environment, then freezing that environment at a scheduled time so it can be promoted to staging. After tests pass there, it finally gets promoted to production. The promotion itself is handled by a Python script he wrote that takes care of things like updating annotations and pushing to git.
Here's where I'm skeptical: is this really a best practice? Relying on a script like that feels a bit like an anti-pattern. Also, how do you keep all the environments consistent with each other? Anton mainly showcased Argo CD Image Updater, which seems fine, since it can keep pulling the latest images once staging is unfrozen. But does that mean you have to manually copy manifest files from the development folder to staging, double-check everything, and then do the same again for production? I'm keen to hear how others manage this and what they think of Anton's approach!
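To make the concern concrete, the "copy and double-check" step I'm imagining would look roughly like this (the folder layout is just my illustration, not from the video):

```python
# Hypothetical layout: envs/dev/, envs/staging/, envs/prod/ each hold the
# same set of manifest files. This just reports which files differ so a
# human can review them before promoting.
import filecmp
from pathlib import Path

def diff_envs(src: str, dst: str) -> list[str]:
    """Return manifest files that differ (or are missing) between two env folders."""
    src_dir, dst_dir = Path(src), Path(dst)
    changed = []
    for manifest in sorted(src_dir.glob("*.yaml")):
        target = dst_dir / manifest.name
        if not target.exists() or not filecmp.cmp(manifest, target, shallow=False):
            changed.append(manifest.name)
    return changed

if __name__ == "__main__":
    for name in diff_envs("envs/dev", "envs/staging"):
        print(f"differs from dev: {name}")
```

Doing that by hand for every promotion is exactly what I'd like to avoid.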
3 Answers
In the GitOps paradigm, something has to create the commit that promotes a deployment, whether that's a script, a CI job, or even a manual step. Running post-deployment checks is just as essential, which is another piece you have to automate or perform.
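The promotion commit itself doesn't have to be anything exotic. A minimal sketch, assuming image tags are pinned in per-environment manifests (all paths and names here are hypothetical, not Anton's script):

```python
# Minimal "promotion commit": pin the image tag that dev is running into the
# staging manifest, then commit and push so Argo CD (or any GitOps controller)
# picks it up. The manifest path and tag source are placeholders.
import re
import subprocess
from pathlib import Path

def promote(tag: str, manifest: str = "envs/staging/deployment.yaml") -> None:
    path = Path(manifest)
    text = path.read_text()
    # Replace whatever tag is currently pinned, e.g. "image: registry/app:1.2.3"
    text = re.sub(r"(image:\s*\S+:)\S+", rf"\g<1>{tag}", text)
    path.write_text(text)

    subprocess.run(["git", "add", manifest], check=True)
    subprocess.run(["git", "commit", "-m", f"promote {tag} to staging"], check=True)
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    promote("1.4.2")  # the tag would normally come from the dev environment or CI
```

Whether that lives in a script, a pipeline job, or a person's hands is mostly a question of how much you trust the checks around it.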
We built Reliza Hub to tackle this issue. It’s a SaaS solution that allows you to manage deployments without needing to freeze your environments. Plus, you get a full audit history and can roll back to any previous state if necessary.
We use GitHub Actions for this. Our DevOps repo holds all our Helm charts, and when we're ready to ship, we create a release in GitHub, adjust the version number, and that kicks off an action to tag everything. Then we manually sync the changes in ArgoCD, which works pretty well for us.
Exactly! Whatever method you choose, there needs to be a reliable way to ensure everything’s functioning after deployment.
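Even something as simple as a smoke test that polls a health endpoint after the sync goes a long way. A rough sketch (the URL and timings are placeholders):

```python
# Rough post-deployment check: poll the app's health endpoint until it
# responds OK or we give up, then exit non-zero so the pipeline fails.
import time
import urllib.error
import urllib.request

def wait_until_healthy(url: str, attempts: int = 30, delay: float = 10.0) -> bool:
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, TimeoutError):
            pass  # not up yet, keep waiting
        time.sleep(delay)
    return False

if __name__ == "__main__":
    ok = wait_until_healthy("https://staging.example.com/healthz")
    raise SystemExit(0 if ok else 1)
```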