Hey everyone! I'm curious about the real-world challenges developers face when working with AI coding tools like Copilot and Claude in medium to large production codebases. Specifically, I'd love to hear about:

1. The frustrations you encounter when these tools operate across multiple files.
2. Whether you fully trust AI-generated refactors in your projects, and if not, why not?
3. Any hidden issues from AI suggestions that only surfaced later.
4. Whether AI actually reduces your code review time, or makes it longer.
5. The toughest parts of maintaining a large repository that AI still struggles with.

I'm looking for practical insights from developers who are in the trenches, not just hot takes. Thanks!
1 Answer
I've noticed that AI tools often don't follow the coding conventions we've built up over the years. They pull in a few files for context, but with legacy code that's 20 years old, no model can hold all that complexity at once. So while they can generate code, they often break things because they lack a deeper understanding of the project's framework.

I totally relate! We have this old app that's a mess, and my boss thinks asking Claude to add features will just work. But it ends up pulling in random dependencies and creating more bugs, which makes testing everything way more time-consuming.