I recently listened to a podcast featuring Harjot Gill and Corey Quinn discussing how AI is changing developers' expectations in code reviews. As someone who manages PR reviews for AWS projects involving containers and CloudFormation, I've noticed that AI tools can significantly speed up spotting misconfigurations and flagging best-practice violations, though false positives can be an issue. Has anyone here used AI review agents for AWS infrastructure code (CDK, Terraform, or CloudFormation)? I haven't tried them on infra code yet, but I've had pretty good results with application code.
6 Answers
Could running some pre-commit hook scripts do similar checks?
We've been using CodeRabbit with Terraform, and overall it's better than doing nothing. It catches bugs that are hard to spot by eye and helps maintain our coding standards. That said, it has off days where it ignores its own settings and goes a bit haywire.
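If you go this route, pinning your standards in the repo config seems to help with the consistency problem. Here's a minimal sketch of a `.coderabbit.yaml`, assuming the schema still supports path-scoped instructions (the keys have changed over time, so verify against the current docs):

```yaml
# .coderabbit.yaml -- illustrative sketch; verify keys against current CodeRabbit docs
language: "en-US"
reviews:
  path_instructions:
    # Assumed feature: per-path review guidance for Terraform files
    - path: "**/*.tf"
      instructions: >-
        Flag unencrypted resources, 0.0.0.0/0 ingress rules, and wildcard
        IAM actions. Enforce our required tags (owner, cost-center).
  auto_review:
    enabled: true
```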
After building some AWS infrastructure with Gemini and OpenTofu (the Terraform fork), I've learned that the key is to provide clear guidance and iterate on changes. AI can work well for code reviews if you spell out your concerns up front, such as security requirements and performance criteria, but I wouldn't skip human review entirely for critical infrastructure. AI works best as an assistant that flags specific areas needing attention.
I find Claude pretty decent for CDK reviews; it has flagged a few real issues for me. Just keep in mind there are false positives; with experience you learn to tell genuine problems from generic 'best practices' overkill. The snippet below gives a feel for the difference.
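To illustrate, here's a hypothetical CDK stack (names are made up) with the kind of findings I mean. The missing blockPublicAccess and enforceSSL are real problems worth fixing; the versioning nit is often noise for scratch data:

```ts
import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';

export class DataStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string) {
    super(scope, id);

    new s3.Bucket(this, 'ReportsBucket', {
      // Real findings: nothing blocks public access or enforces TLS.
      // Fixes: blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      //        enforceSSL: true
      versioned: false, // Also gets flagged, but for scratch data it's usually fine.
    });
  }
}
```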
Cline has been great for reviewing CDK code and even helps me add Jest tests!
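For anyone wondering what those generated tests look like, here's a minimal Jest sketch using the aws-cdk-lib/assertions module. The DataStack import is a hypothetical stack under test, and the test assumes blockPublicAccess has been set on its bucket:

```ts
import { App } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import { DataStack } from '../lib/data-stack'; // hypothetical stack under test

test('reports bucket blocks public access', () => {
  const app = new App();
  const stack = new DataStack(app, 'TestStack');

  // Synthesize the stack to CloudFormation and assert on the template
  const template = Template.fromStack(stack);
  template.hasResourceProperties('AWS::S3::Bucket', {
    PublicAccessBlockConfiguration: {
      BlockPublicAcls: true,
      BlockPublicPolicy: true,
      IgnorePublicAcls: true,
      RestrictPublicBuckets: true,
    },
  });
});
```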
Totally! Pre-commit hooks running linters like tflint, cfn-lint, and checkov can catch a lot of the same misconfigurations before a PR is even opened.
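A sketch of what that might look like (hook repos and ids are from memory, and the rev pins are examples only; check each project's docs):

```yaml
# .pre-commit-config.yaml -- run IaC linters before every commit
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1  # example pin; use a current release
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_tflint
  - repo: https://github.com/aws-cloudformation/cfn-lint
    rev: v1.18.1  # example pin
    hooks:
      - id: cfn-lint
        files: templates/.*\.(json|ya?ml)$
  - repo: https://github.com/bridgecrewio/checkov
    rev: 3.2.0  # example pin
    hooks:
      - id: checkov
```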