I'm curious if anyone here is using tools like ScaleOps and CastAI to automatically change pod requests in Kubernetes. I've heard that less than 1% of teams employ this, and I find that hard to believe. These tools supposedly use LLMs to determine new requests, which I thought made it a safe choice. So, what's holding folks back? Is it a lack of trust in the technology, or is there something more to it?
5 Answers
Let's be real: these tools likely stick to pure math for adjusting requests, so using an LLM here seems excessive. It’s all about finding the right tool for the job.
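To illustrate the "pure math" point: rightsizers in this space generally derive a request from historical usage percentiles plus a safety margin, no LLM required. Here is a minimal sketch of that idea; the function name, the 90th-percentile target, and the 15% headroom are illustrative assumptions, not any vendor's actual algorithm.

```python
import math

def recommend_request(usage_samples_mcpu, percentile=0.90, headroom=1.15):
    """Recommend a CPU request (millicores) from observed usage samples.

    percentile: fraction of samples the request should cover (assumed value).
    headroom: multiplicative safety margin on top of that percentile (assumed).
    """
    if not usage_samples_mcpu:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples_mcpu)
    # Nearest-rank index of the sample at the chosen percentile.
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    return math.ceil(ordered[idx] * headroom)

# Example: per-minute CPU samples peaking around 400m.
samples = [120, 150, 180, 200, 220, 250, 300, 350, 380, 400]
print(recommend_request(samples))  # covers the 90th percentile with headroom
```

Real recommenders (e.g. the VPA recommender) use decayed histograms rather than raw sample lists, but the shape of the computation is the same: a percentile of observed usage scaled by a margin.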
Honestly, trusting an LLM to handle production is a gamble. If something goes sideways, you're the one who pays the price, not the AI.
You think relying on LLMs is totally safe? That's a bit naive. These systems mess up quite a bit.
We evaluated CastAI at my company, and while it performed well, it’s more about math than AI for rightsizing.
I looked into using the Vertical Pod Autoscaler (VPA) instead, but I'd love to hear where this "less than 1%" figure comes from. It seems low.
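For anyone wanting to try the VPA route without handing it control, a recommendation-only setup is a low-risk starting point. A minimal sketch, assuming a Deployment named `my-app` (hypothetical) and the VPA CRDs already installed in the cluster:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # hypothetical workload name
  updatePolicy:
    updateMode: "Off"   # recommendation-only: no pod evictions or rewrites
```

With `updateMode: "Off"` the recommender still publishes suggested requests in the VPA object's status, so you can compare them against what you set by hand before ever letting anything apply changes automatically.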