I've been mulling over the challenges of multi-cloud architecture, particularly when it comes to managing costs effectively. While a lot of talk centers around dynamically moving workloads to where compute is cheapest, I wonder if anyone outside of large hedge funds and tech giants is actually putting mathematical models into practice for cost arbitrage.
I mean, we all know Terraform helps with provisioning, but let's face it, it doesn't unify the cloud; it just gives us a common syntax over different providers. We still deal with AWS and GCP separately, each with its own distinct APIs and resources. We've got FinOps tools that flag which service is cheaper, like "GCP compute is cheaper than AWS," but they often ignore critical factors like data gravity.
If you move an EC2 instance to GCP to save on compute, you might be overlooking the egress fees you'll pay if your data is still sitting in S3. Essentially, I'm interested in whether anyone takes a more analytical approach to this. I've been toying with treating multi-cloud as a physics problem: cost as friction, with something like Dijkstra's algorithm over a cost graph to find the cheapest placement before committing. Is anyone actually modeling this before hitting 'terraform apply', or do most teams just accept high egress fees as part of the landscape? I'd love to hear how people doing FinOps and DevOps handle this in practice!
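To make the "cost as friction" idea concrete: model each service placement as a node, each data transfer as an edge weighted by its per-GB cost, and run Dijkstra from wherever the data lives. A minimal sketch in Python; the `s3`/`ec2`/`gce` topology and every price here are made-up illustration numbers, not real rates:

```python
import heapq

def dijkstra(graph, start):
    """Cheapest per-GB transfer cost from `start` to every reachable node.
    graph: {node: [(neighbor, cost_per_gb), ...]}"""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

# Hypothetical per-GB transfer costs (USD): same-region S3 -> EC2 is free,
# cross-cloud S3 -> GCE pays egress. Real rates vary by region and tier.
graph = {
    "s3": [("ec2", 0.00), ("gce", 0.09)],
}
transfer = dijkstra(graph, "s3")

# Hypothetical monthly compute prices and data volume read from S3.
compute = {"ec2": 700.0, "gce": 550.0}
monthly_gb = 2000

for node in ("ec2", "gce"):
    total = compute[node] + transfer[node] * monthly_gb
    print(node, round(total, 2))
```

With these toy numbers the nominally cheaper GCP instance loses once egress is priced in ($550 + $180 of transfer vs. $700 all-in on AWS), which is exactly the data-gravity effect the per-service FinOps dashboards miss.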
1 Answer
Hey! There are indeed companies that provide tools for optimizing cloud spending, but most of them focus on a single provider rather than modeling migrations between clouds. If you're racking up a hefty bill, say $100K/month, you can certainly find help cutting costs. But Terraform itself doesn't offer anything like the inter-cloud cost analysis you're describing.

Is that tool Aviatrix by any chance?