I'm curious whether anyone here is actually using tools like ScaleOps or Cast AI to automatically adjust pod resource requests. I've heard that fewer than 1% of teams use these tools, which seems strange to me, especially since they claim to use LLMs to determine new requests. If they're designed to be safe, why is adoption so low? Is it really about trust, or am I missing something important?
3 Answers
Honestly, it sounds like a risky move. Just because LLMs are smart doesn't mean they're foolproof. I've seen people ask similar questions, and there's a lot of skepticism about whether these tools can actually predict resource demand accurately. Are they just trying to sell us a fancy idea?
I get where you're coming from, but relying on LLMs to manage CPU and memory allocation doesn't sound safe to me at all. It's like asking a toaster to handle rocket science! There are too many variables in resource management for that to be a good idea.
Isn't this basically what the Vertical Pod Autoscaler does? It automatically adjusts requests based on usage metrics. I wonder if people just find that tool more straightforward or reliable. Better the devil you know, right?
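For anyone who hasn't tried it, here's roughly what that looks like. A minimal VPA manifest sketch, assuming the VPA controller is installed in the cluster and targeting a Deployment (the name `my-app` is just a placeholder):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # placeholder: your workload's name
  updatePolicy:
    updateMode: "Off"     # "Off" = recommendations only, visible via
                          # `kubectl describe vpa my-app-vpa`;
                          # "Auto" would apply them by evicting pods
```

Running it in `"Off"` mode first is a common way to build trust: you can compare its recommendations against your hand-tuned requests before letting it make changes automatically.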