I'm looking to transition my local Docker Compose setup, which includes services like Apache Airflow, Grafana, Streamlit, MLflow, Postgres, and a Jupyter notebook server, to AWS. I want to know which would be the most cost-effective deployment strategy for this replatforming. Additionally, how can I enhance security? I currently have hardcoded passwords in my setup; can I integrate AWS Secrets Manager to secure sensitive data for my services? I run this setup locally for a side project and am eager to leverage the cloud.
5 Answers
If you're aiming for scalability, ECS is a solid choice. But for a simple proof of concept, just running your Docker Compose stack on a single EC2 instance will likely be cheaper. It's straightforward, requires little setup, and you can move to ECS later once you actually need the scaling.
ECS with Fargate might be your best option if you want to skip managing servers. Note that Fargate doesn't run Compose files natively: the Docker Compose ECS integration in the Docker CLI has been deprecated, so expect to translate your compose file into ECS task definitions. On the plus side, ECS task definitions can pull credentials straight from Secrets Manager (via the `secrets` field), so getting your passwords out of the config is easy. This way you minimize ops. If it's just a side project, you could also consider Lightsail for an even cheaper option, but be aware there's more management involved.
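If you'd rather fetch secrets from application code (e.g. in an Airflow DAG or a Streamlit app) instead of the task definition, here's a minimal boto3 sketch. The secret name `side-project/postgres` and the JSON fields `username`/`password` are assumptions for illustration; adapt them to however you create the secret.

```python
import json


def get_secret(secret_name, region="us-east-1"):
    """Fetch a secret string from AWS Secrets Manager.

    Requires boto3 and AWS credentials (e.g. an instance/task role).
    """
    import boto3  # imported here so the helper below works without it

    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    return response["SecretString"]


def postgres_dsn(secret_json, host="postgres", port=5432, dbname="mlflow"):
    """Build a Postgres connection URL from a JSON secret string."""
    creds = json.loads(secret_json)
    return f"postgresql://{creds['username']}:{creds['password']}@{host}:{port}/{dbname}"


# Example usage (hypothetical secret name, created beforehand with
# `aws secretsmanager create-secret --name side-project/postgres ...`):
#   dsn = postgres_dsn(get_secret("side-project/postgres"))
```

Cache the result rather than calling `get_secret_value` on every request; Secrets Manager bills per API call as well as per secret stored.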
Honestly, consider getting a dedicated VPS from another provider instead. AWS can get pricey if you're not leveraging its scaling features. A dedicated server might be more cost-effective for your needs.
If you plan to run this long-term, use the AWS Pricing Calculator to get a good estimate. Look into Managed Workflows for Apache Airflow (MWAA), Amazon Managed Grafana, and SageMaker for your notebooks. It may seem pricier upfront but could save you effort and costs down the line.
Elastic Beanstalk's Docker platform can run a docker-compose.yml more or less as-is, which makes it the smoothest lift-and-shift path. For ECS, the old Docker Compose CLI integration has been deprecated, so plan on converting your compose file into task definitions if you go that route.
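Whichever path you pick, the first step is getting the hardcoded passwords out of the compose file itself. A sketch (service names and image tags are illustrative, adjust to your actual stack):

```yaml
# docker-compose.yml fragment: passwords come from the environment at
# deploy time (e.g. populated from Secrets Manager by your launch script
# or an Elastic Beanstalk environment property), not from this file.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  mlflow:
    image: ghcr.io/mlflow/mlflow:latest
    environment:
      MLFLOW_BACKEND_STORE_URI: postgresql://mlflow:${POSTGRES_PASSWORD}@postgres:5432/mlflow
    depends_on:
      - postgres
```

This keeps the compose file safe to commit, and the same file works locally (with a `.env` you don't commit) and on AWS.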
+1 for that suggestion! Avoiding hyperscalers can save you a lot of cash in the long run.