I've been using AI assistants for a while now, and while they can generate boilerplate code and complex algorithms in no time, they struggle to grasp the specifics of my team's project. For instance, they often suggest public libraries when we already have perfectly good internal alternatives, or they write code that goes against our established architectural patterns. They can't seem to answer basic questions that require project context, like why we built our authentication service in a certain way or how to properly add a new event to the analytics pipeline. Essentially, I feel like I spend more time correcting their suggestions than I do on actual coding. How do others bridge the gap between the generic knowledge of AI and the specific needs of their projects?
1 Answer
You've hit on a real limitation: an LLM only knows its training data and whatever is in its context window, so it has no visibility into your internal libraries, your architectural decisions, or the history behind them. That's why so many teams see heavy rework on AI-generated code. A senior developer who knows the codebase fills that gap naturally; an assistant can only approximate it if you explicitly feed it project context, and even then it won't match the judgment of someone who was there when those decisions were made.
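If you do want to narrow the gap, the one reliable lever is to put that context into the prompt yourself. Here's a minimal sketch of the idea; the file name, its contents, and the message format are assumptions for illustration, not any particular tool's API:

```python
# Sketch: prepend a team conventions document to every assistant request,
# so the model sees project-specific context it otherwise lacks.
# "docs/ai-context.md" is a hypothetical file you would maintain yourself.

CONVENTIONS_FILE = "docs/ai-context.md"

def build_messages(user_question: str, conventions_path: str = CONVENTIONS_FILE) -> list[dict]:
    """Assemble a chat-style message list with project context up front."""
    try:
        with open(conventions_path, encoding="utf-8") as f:
            conventions = f.read()
    except FileNotFoundError:
        conventions = ""  # fall back to a generic prompt if the doc is missing

    system = (
        "You are assisting on our internal codebase. "
        "Prefer the internal libraries and patterns described below "
        "over public alternatives.\n\n" + conventions
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]
```

The payoff is that questions like "how do I add an analytics event?" get answered against your documented pipeline instead of a generic one, but the quality of the answers is only as good as the conventions document you keep up to date.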
