I'm exploring AI-driven tools that can provide inline feedback on GitHub pull requests (PRs), enforce coding standards, and adapt based on team feedback. I've seen some examples like Cubic Dev. For those of you in web development, do you find these auto-generated reviews and change summaries helpful in your workflow, or do they just add unnecessary noise?
6 Answers
I tried using automated code reviews with Copilot but ended up turning it off. Out of six PRs I submitted, only one review caught a style issue I missed, while the others were either wrong or just not useful. I did find it helpful to have summaries, though!
Yeah, those summaries can be useful! I appreciate looking back at my code to validate my work, even if the comments aren't always accurate.
Honestly, I find them pretty useless for serious technical reviews! But I did discover that Gemini's code review can write some cute little poems based on the PR content, which is a fun touch.
Hey, don’t forget, SonarQube has been around for quite a while. It’s a reliable option for code quality checks!
I haven't had much success with tools like Copilot for code review. I find more value in manually asking Copilot chat to review my changes while guiding it with specific instructions to focus on what matters.
I actually use multiple AIs to check, test, and merge my code. Honestly, I'm not always sure if it works properly, but they all assure me everything's great!
These tools can be handy, but they're far from perfect, especially with complex code, where they tend to make a lot of mistakes.
I went back and checked the one review that worked: it was from a PR where I refactored some AI-generated code, and I'd forgotten to remove a non-null assertion. The two other comments from the tool were completely off, though. Still, the summary feature was pretty valuable.
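For anyone unfamiliar with that kind of catch: a leftover non-null assertion is the TypeScript `!` operator, which tells the compiler a value can't be `null` or `undefined` and silences the check. Here's a hypothetical sketch of the pattern (the `User` type and functions are made up for illustration, not from my actual PR):

```typescript
interface User {
  id: number;
  name: string;
}

// The `!` after find() suppresses the "possibly undefined" error.
// If no user matches, this throws at runtime instead of failing
// to compile -- exactly the kind of thing worth flagging in review.
function getUserName(users: User[], id: number): string {
  return users.find((u) => u.id === id)!.name;
}

// Safer alternative: handle the undefined case explicitly.
function getUserNameSafe(users: User[], id: number): string {
  const user = users.find((u) => u.id === id);
  return user ? user.name : "unknown";
}
```

The assertion is fine as a temporary scaffold while refactoring, but it's easy to forget, which is presumably why the tool flagged mine.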