I've been working as a full-stack developer for two years now, tackling JavaScript, React, and Python, alongside some machine learning and automation projects. Like many others, I rely on AI tools such as Cursor, Claude, and Copilot to streamline my work. While these tools certainly make us faster, I've started seeing a lot of low-quality code creeping into our pull requests, which I like to call "artifact cruft". This junk includes things like leftover console logs, redundant comments, unnecessary try/catch blocks, overly convoluted variable names, inconsistent coding styles, and hardcoded values, just to name a few. I've tried various methods to combat this: using ESLint and Prettier to catch style and syntax issues, bringing it up in standups, creating style guideline files, and more, but the problems persist. I'm curious how other teams are handling this. Are there specific strategies or tools you've implemented to deal with low-quality AI-assisted code effectively?
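For what it's worth, a lot of the specific cruft listed above (console logs, magic numbers, dead variables) can be turned into hard lint failures rather than style-guide suggestions. A minimal sketch of an `.eslintrc.json` targeting those patterns, assuming you already have `eslint-config-prettier` installed for the `prettier` entry:

```json
{
  "extends": ["eslint:recommended", "prettier"],
  "rules": {
    "no-console": "warn",
    "no-debugger": "error",
    "no-unused-vars": "error",
    "no-magic-numbers": ["warn", { "ignore": [0, 1, -1] }]
  }
}
```

Running ESLint in CI with `--max-warnings 0` makes the warnings blocking, which tends to matter more than which rules you pick, since AI-generated PRs will sail through anything that's only advisory.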
4 Answers
You might want to look into SonarQube or similar tools that flag these kinds of issues automatically. They can take some of the burden off human reviewers, letting you focus on substantive code quality problems instead of nit-picking.
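If you go the SonarQube route, the per-repo setup is mostly a small properties file at the project root. A minimal sketch, where the project key and directory names are placeholders for your own:

```properties
# sonar-project.properties (projectKey and paths are hypothetical examples)
sonar.projectKey=my-app
sonar.sources=src
sonar.tests=tests
sonar.exclusions=**/node_modules/**,**/dist/**
```

The scanner then runs against this in CI; the useful part for the cruft problem is wiring its quality gate to block merges, not just report.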
Honestly, if the PR is bad, it’s best just to comment on it and reject it. Expect corrections. Some may find it slow or repetitive, but this is part of the job, and it’s crucial to maintain quality. It might feel like I’m being old-fashioned, but relying too much on automation can really backfire. Just get in there and do the review!
I totally agree! I’ve been searching for industry solutions, but sometimes the old ways are still the best.
I say stick to your guns and don’t outsource your coding to AI tools. They often function like junior developers who might know a few tricks but don’t truly understand the full picture, which leads to more problems than you realize.
I’m coming to that realization too. It feels like people are more inclined to let AI do the heavy lifting rather than engage with the code.
Communication is key! Regular check-ins with the team can help address issues with AI-generated code. Also, breaking changes into smaller pull requests can help minimize the chaos. Just remember, you shouldn't treat AI code as gospel—it needs review every time, because mistakes will happen, no matter what.
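Agreeing on process only sticks if something enforces it on every PR. A minimal sketch of a GitHub Actions workflow that lints each pull request, assuming a standard Node project with an ESLint config already in place:

```yaml
# .github/workflows/lint.yml
name: lint
on: pull_request
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Fail the check on any warning, so advisory rules still block merges
      - run: npx eslint . --max-warnings 0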

I’ve heard good things about SonarQube. Trying to set it up to see if it cuts down on the low-quality submissions.