I'm considering implementing KEDA for event-driven scaling to reduce idle pod expenses. If you've deployed it in a real production setup, did you notice any significant cost savings, or did it just add more operational hassle?
5 Answers
KEDA can be great for autoscaling without much operational overhead since it builds on the existing HPA in Kubernetes. One thing worth knowing: the stock HPA can't scale a workload below one replica, while KEDA can. So if idle cost from low traffic is your main concern, right-sizing your pods and enabling scale-to-zero via KEDA may be where the real win is.
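As a rough illustration of what scale-to-zero looks like in KEDA, here's a minimal ScaledObject sketch. The Deployment name (`worker`), Prometheus address, and query are placeholders you'd replace with your own:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # hypothetical Deployment name
  minReplicaCount: 0        # scale to zero when idle (stock HPA can't do this)
  maxReplicaCount: 20
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # example address
        query: sum(rate(http_requests_total{app="worker"}[2m]))
        threshold: "10"
```

With `minReplicaCount: 0`, KEDA deactivates the Deployment entirely when the trigger reports no load, then hands scaling back to the HPA it manages once traffic returns.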
We use KEDA for our pod scaling; it helps with spiky traffic but does little for steady workloads. If your traffic fluctuates a lot it's worth considering, but don't expect a game changer for consistent load.
It’s super effective for scaling pods that consume from SQS. It can scale down to zero when the queue is empty and ramp back up when messages arrive. That's where the real savings come from!
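For anyone curious, a trigger for this looks roughly like the sketch below. The queue URL, region, and TriggerAuthentication name are made-up examples:

```yaml
triggers:
  - type: aws-sqs-queue
    metadata:
      queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue  # example URL
      queueLength: "5"          # target messages per replica
      awsRegion: us-east-1
    authenticationRef:
      name: keda-aws-auth       # hypothetical TriggerAuthentication with IAM creds
```

KEDA scales replicas toward `queue depth / queueLength`, and drops to zero (if `minReplicaCount: 0`) once the queue drains.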
Absolutely! In our case, we've set it up to automatically shut down our QA environments after hours and let our worker pods scale to zero when they aren’t needed, which has cut costs noticeably.
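The QA shutdown part can be done with KEDA's cron scaler; something along these lines (timezone, schedule, and replica count are just examples):

```yaml
triggers:
  - type: cron
    metadata:
      timezone: America/New_York   # example timezone
      start: 0 8 * * 1-5           # scale up weekdays at 08:00
      end: 0 18 * * 1-5            # scale back down at 18:00
      desiredReplicas: "3"
```

Outside the start/end window the workload falls back to `minReplicaCount`, so with `minReplicaCount: 0` the QA environment sits at zero pods overnight and on weekends.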
We're using KEDA with our Kafka topics, and the results are mixed. Pods scale up and down as consumer lag changes, but they all end up packed onto one big node, so our bill hasn’t really dropped. It handles bursts well, but you need to watch how pod scaling interacts with node scaling.
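For reference, the Kafka side of this is a trigger like the one below (broker address, consumer group, and topic are placeholders). The pod-scaling part is the easy half; the savings depend on whatever scales your nodes:

```yaml
triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.default.svc:9092   # example broker address
      consumerGroup: orders-consumer             # hypothetical consumer group
      topic: orders                              # hypothetical topic
      lagThreshold: "50"                         # target lag per replica
```

KEDA sizes the consumer Deployment from consumer-group lag, but if your cluster autoscaler never consolidates the freed capacity onto fewer nodes, the pod churn won't show up on the bill.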
True! KEDA manages the bursts well, but pod-level scaling only saves money if the cluster autoscaler can actually remove the freed-up nodes afterward.