How to Set Up AWS RDS with Terraform and Argo CD?

Asked By CuriousCoder42 On

I'm trying to establish a smooth deployment process using Terraform alongside Argo CD. My main objective is to create an AWS RDS database with Terraform and then have my application, which is deployed through Argo CD, use the connection string for that database. Initially, I considered using Crossplane to manage this within Kubernetes, but I found the resource updates to be quite clunky and unstable. I'm thinking it might be simpler to let Terraform handle the database provisioning, save the output (the DB URL), and figure out a way to inject that into my application, perhaps through a GitHub Action that updates a Kubernetes secret or Helm values file before Argo CD syncs. Has anyone tackled something similar more effectively? I'd love to hear how you're managing the combination of RDS setup and app configuration with Terraform and Argo CD. Thanks!
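For reference, the Secret-update step described in the question might look something like this (the output name `db_connection_string`, the secret name `app-db`, and the `manifests/` path are all assumptions, not a confirmed setup):

```shell
# Read the DB URL from Terraform state and render it as a Kubernetes
# Secret manifest that Argo CD can sync from the repo.
DB_URL=$(terraform output -raw db_connection_string)

kubectl create secret generic app-db \
  --from-literal=DATABASE_URL="$DB_URL" \
  --dry-run=client -o yaml > manifests/db-secret.yaml
```

A GitHub Action would then commit `manifests/db-secret.yaml` (or, better, a SealedSecret/ExternalSecret derived from it) so the plain credential never lands in Git.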

9 Answers

Answered By YAMLMaestro On

I've been in a similar position recently, managing multiple Terraform stacks and variables for Kustomize instances. I created a make recipe that pulls Terraform outputs and uses a boilerplate for generating Kustomize patches and secrets, which has been useful for both creation and updates.
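A minimal sketch of that kind of recipe body, assuming a layout with an `infra/` Terraform directory and an `overlays/dev` Kustomize overlay (both names are illustrative):

```shell
# Export a Terraform output and wire it into a Kustomize secret
# generator; re-running keeps the overlay in sync after updates.
terraform -chdir=infra output -raw db_url > overlays/dev/db_url.txt
cd overlays/dev
kustomize edit add secret app-db --from-file=DATABASE_URL=db_url.txt
```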

Answered By CloudArchitect101 On

If you're already using AWS, consider utilizing AppConfig to let your application pull its configurations. You can set up Terraform to create the AppConfig based on the RDS output—it's a flexible option that opens the door for some feature flagging.
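A hedged sketch of what that could look like in Terraform, publishing the RDS endpoint as a hosted AppConfig configuration (the resource names and the `aws_db_instance.main` reference are assumptions):

```hcl
resource "aws_appconfig_application" "app" {
  name = "my-app"
}

resource "aws_appconfig_environment" "prod" {
  name           = "prod"
  application_id = aws_appconfig_application.app.id
}

resource "aws_appconfig_configuration_profile" "db" {
  application_id = aws_appconfig_application.app.id
  name           = "database"
  location_uri   = "hosted"
}

# Each apply that changes the endpoint publishes a new config version
# the application can poll for.
resource "aws_appconfig_hosted_configuration_version" "db" {
  application_id           = aws_appconfig_application.app.id
  configuration_profile_id = aws_appconfig_configuration_profile.db.configuration_profile_id
  content_type             = "application/json"
  content = jsonencode({
    db_url = aws_db_instance.main.endpoint
  })
}
```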

Answered By K8sWizard On

Are you working within an EKS setup or a self-hosted Kubernetes? I'd recommend skipping static passwords entirely and using IAM database authentication: the AWS CLI's `aws rds generate-db-auth-token` command produces a short-lived token your app uses as the password in its connection string. It's simpler with EKS pods (the IAM role comes for free), but you can manage it even with self-hosted setups.
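For illustration, the token flow looks roughly like this (hostname, user, and database name are placeholders; this assumes IAM auth is enabled on the instance and, for Postgres, that the user has been granted the `rds_iam` role):

```shell
# Generate a short-lived auth token and use it as the DB password.
TOKEN=$(aws rds generate-db-auth-token \
  --hostname mydb.abc123.us-east-1.rds.amazonaws.com \
  --port 5432 \
  --username app_user \
  --region us-east-1)

psql "host=mydb.abc123.us-east-1.rds.amazonaws.com port=5432 \
      user=app_user password=$TOKEN sslmode=require dbname=appdb"
```

Tokens expire after 15 minutes, so the app (or a sidecar) needs to regenerate them rather than caching one at startup.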

Answered By DevGuruX On

You could provision your database with Terraform and create a DNS record that directs to its endpoint. Reference that DB URL in your Helm values or templates. This way you can securely store your database credentials in the cloud provider's secret manager and utilize something like the External Secrets Operator to sync them with your cluster.

TechSavvyK -

That’s pretty much how we handle it too. It keeps things straightforward and reliable. My one issue was getting Helm values right when integrating Terraform with an Argo app. I ended up developing a custom provider function that converts a Terraform object to YAML for Helm values, which helped avoid unnecessary double quotes and passed actual variable values!

CloudNinja -

Exactly! This keeps it simple without adding extra layers of complexity.
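A sketch of the pattern described above in Terraform, assuming a pre-existing `aws_route53_zone.internal`, an `aws_db_instance.main`, and a `random_password.db` (all illustrative names):

```hcl
# Stable DNS name in front of the RDS endpoint, so manifests never
# reference the raw AWS hostname.
resource "aws_route53_record" "db" {
  zone_id = aws_route53_zone.internal.zone_id
  name    = "db.internal.example.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_db_instance.main.address]
}

# Push the credentials to Secrets Manager; the External Secrets
# Operator syncs them into the cluster from there.
resource "aws_secretsmanager_secret" "db" {
  name = "app/db-credentials"
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = aws_db_instance.main.username
    password = random_password.db.result
    url      = "postgres://db.internal.example.com:5432/appdb"
  })
}
```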

Answered By DatabaseDynamo On

We handle RDS management through Terraform too, creating route53 records and executing a Lambda function post-creation for database and user management. It does get quite cumbersome at scale, though. We're looking at shifting towards CNCF alternatives like PG Operator for simpler database management.

Answered By InfraNinja On

You can create Argo CD applications with Terraform, injecting parameters like DB connection details, VPC configurations, etc. Let me know if you'd like an example, as that could be really useful!
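In the absence of the offered example, here is a hedged sketch of the idea using the `kubernetes_manifest` resource to create an Argo CD `Application` with the DB endpoint injected as a Helm parameter (repo URL, chart path, and namespaces are placeholders):

```hcl
resource "kubernetes_manifest" "app" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "my-app"
      namespace = "argocd"
    }
    spec = {
      project = "default"
      source = {
        repoURL        = "https://github.com/example/charts.git"
        path           = "my-app"
        targetRevision = "main"
        helm = {
          # Terraform injects infrastructure outputs as Helm parameters.
          parameters = [
            {
              name  = "database.host"
              value = aws_db_instance.main.address
            },
          ]
        }
      }
      destination = {
        server    = "https://kubernetes.default.svc"
        namespace = "my-app"
      }
    }
  }
}
```

One trade-off worth noting: the Application spec now lives in Terraform state rather than purely in Git, so changes to those parameters go through `terraform apply` instead of a Git commit.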

Answered By StackPilot On

It sounds pretty straightforward, but we maintain a clear separation of responsibilities in our setup. We have specific conventions for ConfigMaps and Secrets; all environment variables follow a standard naming convention. When Terraform runs, it generates the Secrets and ConfigMaps, which our GitOps services can reference. This structure offers flexibility, especially for ephemeral environments.
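A sketch of that convention using the Terraform Kubernetes provider (names like `my-app-env` and the `random_password.db` reference are assumptions; the provider base64-encodes Secret `data` for you):

```hcl
# Non-sensitive connection details under a predictable ConfigMap name
# that deployment manifests reference via envFrom.
resource "kubernetes_config_map" "app_env" {
  metadata {
    name      = "my-app-env"
    namespace = "my-app"
  }
  data = {
    DB_HOST = aws_db_instance.main.address
    DB_PORT = tostring(aws_db_instance.main.port)
  }
}

# Sensitive values under an equally predictable Secret name.
resource "kubernetes_secret" "app_db" {
  metadata {
    name      = "my-app-db"
    namespace = "my-app"
  }
  data = {
    DB_PASSWORD = random_password.db.result
  }
}
```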

Answered By AWSAlterEgo On

First off, consider using IRSA or Pod Identity to eliminate the need for hard-coded credentials; your app can just use the AWS permissions linked to its service account. You can then pass the database endpoint into your Helm values as an environment variable, giving developers a standardized set of variable names to rely on.
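If Terraform is also driving the Helm release, passing the endpoint under a standard variable name could look like this (chart path, namespace, and the `env.DB_HOST` values key are all assumptions about the chart's conventions):

```hcl
resource "helm_release" "app" {
  name      = "my-app"
  chart     = "./charts/my-app"
  namespace = "my-app"

  # yamlencode keeps the values well-formed without manual quoting.
  values = [yamlencode({
    env = {
      DB_HOST = aws_db_instance.main.address
    }
  })]
}
```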

Answered By KubeMaster305 On

Create the database using Terraform and store the credentials in Secrets Manager. You can then retrieve those secrets in your cluster using the External Secrets Operator, which could streamline the process nicely.
