I've been diving into AWS setups, and one thing that keeps coming up for me is the concept of continuous optimization. Everyone seems to think it should be an ongoing effort, but in practice, it often turns into a big cleanup every few months or just reacting to cost spikes whenever they occur. Tools like Cost Explorer, Compute Optimizer, and Trusted Advisor are great for providing data, but consistently acting on that information seems like a challenge. I struggle with knowing what changes are safe to make, understanding their potential impacts, and balancing cost against performance risks. How are others managing to keep AWS optimization continuous, or does it end up being more of an ad hoc process?
1 Answer
One effective strategy is chargeback: tie AWS costs directly to the departments consuming the services, using cost allocation tags so each team's spend shows up against its own budget. When teams see the cost hitting their budgets, they pay much closer attention. What you're describing is common; between optimization cycles, I focus on structural work, like consolidating load balancers and applying data retention policies. To keep drift under control, I enforce a strict no-console-access policy and manage everything through Infrastructure as Code (IaC) with Terraform. Once changes only happen through code, the environment settles down, which makes it much easier to layer cost-visibility tooling on top.
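To make the chargeback idea concrete, here's a minimal sketch in Python. The line-item shape, the `department` tag key, and the `chargeback_totals` helper are all illustrative assumptions (not an AWS API); in practice you'd feed it records from Cost Explorer or the Cost and Usage Report, grouped by whatever cost allocation tag your organization activates.

```python
from collections import defaultdict

def chargeback_totals(line_items, tag_key="department", untagged_bucket="untagged"):
    """Roll up spend per department from tagged cost line items.

    line_items: iterable of (cost_usd, tags) pairs, where tags is a dict
    of cost-allocation tags. The tag key is a placeholder assumption.
    """
    totals = defaultdict(float)
    for cost, tags in line_items:
        # Untagged spend lands in its own bucket so it stays visible
        # instead of silently disappearing from every team's report.
        totals[tags.get(tag_key, untagged_bucket)] += cost
    return dict(totals)

# Hypothetical sample data for illustration only.
items = [
    (120.50, {"department": "data-eng"}),
    (80.25, {"department": "web"}),
    (40.00, {}),  # untagged resource
    (10.75, {"department": "data-eng"}),
]
print(chargeback_totals(items))
# → {'data-eng': 131.25, 'web': 80.25, 'untagged': 40.0}
```

The untagged bucket is the useful part: it turns missing tags into a visible line item, which is often what motivates teams to fix their tagging in the first place.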

That setup sounds solid! It seems like you've managed to prevent a lot of waste from the get-go. Do you still find yourself revisiting things like sizing frequently, or does it stabilize after those initial adjustments?