I've been using AI coding tools like Copilot, ChatGPT, and Cursor regularly, and at first everything seemed to get smoother: quicker output, less boilerplate, fewer obstacles. But after a few weeks I started noticing some worrying patterns: logic ending up in inappropriate places, files reaching into layers they shouldn't touch, and small one-off changes piling up. None of this fails the build, and the linters don't flag anything, but the codebase feels steadily messier.

Right now I handle it by catching issues during pull request review, leaving comments like "please move this to X", and planning to refactor later (which in practice rarely happens).

Is this just how it is when using AI tools, or am I missing an obvious practice? How do experienced teams keep their architecture from drifting when they adopt AI?
1 Answer
Honestly, it sounds like you might be putting too much trust in the AI. If you're not reviewing the generated code closely, it's no surprise things slip through: a passing build and a clean lint run are a low bar, because ordinary linters check syntax and style, not your architecture, so layer violations sail right past them. Always double-check AI output the way you'd review a new contributor's PR; ask where the logic lives and what it imports, not just whether it works. You're still the captain of the ship here!
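One way to make that double-checking cheap is to encode your layering rules as a test, so every AI-generated change gets checked automatically instead of relying on reviewer memory. Here's a minimal sketch in Python using only the standard library; the layer names and directory layout (`app/api`, `app/services`, `app/db`) are hypothetical stand-ins for whatever boundaries your codebase actually has, and dedicated tools like import-linter (Python) or ArchUnit (Java) do the same job more thoroughly.

```python
# Minimal architecture check: fail when a module imports from a layer
# it isn't allowed to touch. Layer names/paths below are illustrative.
import ast
from pathlib import Path

# Hypothetical rule set: each layer lists the layers it may import from.
ALLOWED = {
    "app.api": {"app.services"},   # API layer may only use services
    "app.services": {"app.db"},    # services may talk to the DB layer
    "app.db": set(),               # DB layer imports nothing internal
}

def layer_of(module: str) -> str | None:
    """Map a dotted module path to the layer it belongs to, if any."""
    for layer in ALLOWED:
        if module == layer or module.startswith(layer + "."):
            return layer
    return None

def check_file(path: Path, source_layer: str) -> list[str]:
    """Return human-readable layering violations found in one file."""
    tree = ast.parse(path.read_text(), filename=str(path))
    violations = []
    for node in ast.walk(tree):
        # Collect the dotted names this file imports. Relative imports
        # (node.module is None) are skipped to keep the sketch short.
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            target = layer_of(name)
            if target and target != source_layer and target not in ALLOWED[source_layer]:
                violations.append(f"{path}: {source_layer} imports {name}")
    return violations

def test_layering() -> None:
    """Runs under pytest; fails the build when a layer rule is broken."""
    problems = []
    for layer in ALLOWED:
        layer_dir = Path(layer.replace(".", "/"))
        for py_file in layer_dir.rglob("*.py"):
            problems.extend(check_file(py_file, layer))
    assert not problems, "\n".join(problems)
```

Dropped into a pytest suite, this fails CI the moment a change reaches across a boundary, which frees your PR reviews to focus on design instead of policing imports.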

Totally agree! It's all about keeping an eye on the code. Don't let the tooling do all the thinking for you.