I'm curious if anyone here is involved in deploying AI or machine learning workloads as part of their DevOps practices. What tools or projects are you working on, and how are you improving your skills in AI and ML?
5 Answers
I've been focusing on building Kubeflow workflows based on Jupyter notebooks to streamline model development and deployment. It's been quite interesting, but I feel like it’s just the tip of the iceberg when it comes to MLOps.
AI has mostly been helping me learn a new tool that detects IaC drift in Terraform. It learns patterns in our configurations and offers suggestions, which is pretty neat. The best part? It's free!
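The answer above doesn't name the tool, but the core idea of IaC drift detection can be sketched in a few lines: compare the desired state (what Terraform thinks should exist) against the actual state reported by the provider. The attribute names below are hypothetical, purely for illustration.

```python
# Minimal drift-detection sketch: diff desired vs. actual resource attributes.
# A real tool would parse `terraform plan` / state files and provider APIs.

def detect_drift(desired: dict, actual: dict) -> dict:
    """Return attribute -> (desired_value, actual_value) for every mismatch."""
    drift = {}
    for key in desired.keys() | actual.keys():
        if desired.get(key) != actual.get(key):
            drift[key] = (desired.get(key), actual.get(key))
    return drift

# Hypothetical example: someone resized an instance outside of Terraform.
desired = {"instance_type": "t3.micro", "tags": {"env": "prod"}}
actual = {"instance_type": "t3.small", "tags": {"env": "prod"}}
print(detect_drift(desired, actual))
# -> {'instance_type': ('t3.micro', 't3.small')}
```

The set union over keys also catches attributes that exist on only one side (e.g. a tag added manually in the console), which plain key-by-key comparison of `desired` would miss.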
We're currently using AI for monitoring and alerting at Okahu, specifically for anomaly detection in logs and predictive scaling. I’ve learned a lot from Andrew Ng's course, but honestly, most of my learning comes from trial and error during production fixes. I’ve also been experimenting with LLMs to generate Terraform configs from plain text, and it works well about 60% of the time, which isn’t terrible!
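The anomaly detection mentioned above isn't described in detail, but a simple version of the idea can be shown with a rolling z-score over a log-derived metric. This is an illustrative sketch, not the poster's actual setup; production systems typically use more robust statistics or learned models.

```python
# Sketch of anomaly detection on a metric series (e.g. request latency
# extracted from logs): flag points far outside the recent baseline.
from statistics import mean, stdev

def anomalies(series, window=10, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` values."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical latencies in ms, with one spike at the end.
latencies = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 500]
print(anomalies(latencies))  # -> [10], the spike is flagged
```

The same baseline-plus-threshold pattern extends naturally to predictive scaling: instead of alerting on deviations, you forecast the next window and provision capacity ahead of it.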
I've done some testing with AI tools aimed at students, and I've heard that enterprise-level tools are even more impressive. There are plenty of online resources about deploying AI agents locally. I enjoy the system-design side of it, along with reviewing code and logic.
Absolutely! Many of us in DevOps are now rolling out AI and ML workloads. A lot of the upskilling happens through hands-on projects and learning MLOps fundamentals, like Docker, Kubernetes, MLflow, and Kubeflow, rather than diving deep into ML theory.