Best Practices for Running Database Migrations in Kubernetes

Asked By TechWhiz42

I'm trying to figure out the best way to handle database migrations within my Kubernetes cluster for my Symfony and Laravel applications. I'm using Kustomize and my apps are managed by an ApplicationSet with ArgoCD. I've explored a couple of methods but ran into problems:

- **Init Containers:** When scaling, several replicas can run their init containers simultaneously, which is risky for migrations that work against the same database. And when an init container fails, the new pods just sit in the Init state rather than cleanly halting the rollout, which isn't ideal for keeping the previous version of the app serving.

- **Jobs:** A Job's spec is immutable in Kubernetes, so a new image deployment can't simply overwrite the existing Job. I also can't fall back on generated names to bypass this because of my use of Kustomize and ApplicationSet, which complicates things further.

I want a straightforward solution: whenever the image digest in a kustomization.yml changes, it should trigger a migration script that runs first. If this script completes successfully, only then should the regular deployment proceed. The migration task must run only once per image digest change, should not trigger on scaling events, and in case of failure, it should halt the deployment while keeping the current version live.

Additionally, I'm using ArgoCD sync waves, with the migration job set on sync-wave 1, while deployment occurs at sync-wave 2. Anyone else dealing with a similar scenario or have some potential solutions?

4 Answers

Answered By CleverCoder21

You could also consider using a lock in your database schema that the migration itself sets before execution. This way, if multiple migration containers run, only one actually processes the change while others wait. This can prevent conflicts and might suit your deployment system's needs! Plus, if any issues arise during the migration, you can tackle them without risking your existing deployment.
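The idea above can be sketched with a table whose primary key is the image digest: the first container's `INSERT` succeeds and it runs the migration, while every concurrent container hits a uniqueness violation and skips. This is a minimal Python illustration using SQLite; the table and function names are made up for the example, and on PostgreSQL or MySQL you would more likely reach for `pg_advisory_lock()` or `GET_LOCK()` instead:

```python
import sqlite3

def acquire_migration_lock(conn, digest):
    """Try to claim the migration for a given image digest.
    Returns True only for the first caller; concurrent callers get False."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS migration_lock (digest TEXT PRIMARY KEY)"
    )
    try:
        # The PRIMARY KEY constraint makes the claim atomic: only one
        # container can insert a given digest, so only one runs migrations.
        conn.execute("INSERT INTO migration_lock (digest) VALUES (?)", (digest,))
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        return False

conn = sqlite3.connect("app.db")
if acquire_migration_lock(conn, "sha256:abc123"):
    print("lock acquired, running migrations")
else:
    print("another container already claimed this digest, skipping")
```

This also gives you the once-per-digest behaviour for free: a scaled-up replica with the same digest finds the row already present and skips.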

Answered By CloudNinja83

I use ArgoCD sync waves too! It's been quite effective for handling migrations. Have you looked into a PreSync hook? That can run a Job specifically for migrations, before the rest of the sync. The ArgoCD documentation has examples that could help you set this up correctly. I'm sure it can fit your needs nicely!
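For reference, a PreSync hook Job looks roughly like the sketch below. The name, image, and command are placeholders (assuming a Laravel app here); the key parts are the two annotations. `BeforeHookCreation` makes ArgoCD delete the previous hook Job before each sync, which sidesteps the Job-immutability problem from the question:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate   # placeholder name
  annotations:
    argocd.argoproj.io/hook: PreSync
    # Delete the old Job before creating the new one, so the immutable
    # spec can be replaced whenever the image digest changes.
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app@sha256:...   # same digest as the Deployment
          command: ["php", "artisan", "migrate", "--force"]
```

If the hook Job fails, ArgoCD marks the sync as failed and the Deployment in the later wave is never applied, which matches the "halt and keep the current version live" requirement.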

Answered By SysAdminPro

Just a thought, but since you're using ArgoCD, why not look into their documentation for migrations? They offer a solid example with a Job annotation for PreSync. It can definitely align with your deployment strategy, and it could simplify how you manage migrations in the future.

Answered By DevGuru99

While I understand your situation, we typically use pre-upgrade or pre-install Helm hooks for database migrations. When we considered moving away from Helm, we did a proof of concept with a pre-deploy job that served as a dependency for a deploy job using Flux. Maybe something similar would work for you?
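For completeness, the Helm variant mentioned above hangs the same kind of migration Job off Helm's hook annotations. This is a trimmed sketch with placeholder names; `before-hook-creation` plays the same role as ArgoCD's delete policy, replacing the immutable Job on each upgrade:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-migrate"
  annotations:
    # Run before install and before every upgrade, ahead of the Deployment.
    "helm.sh/hook": pre-install,pre-upgrade
    # Delete the previous hook Job before creating the new one.
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app@sha256:...   # placeholder
          command: ["bin/console", "doctrine:migrations:migrate", "-n"]
```

A failed pre-upgrade hook aborts the Helm upgrade, leaving the previous release running.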
