I'm genuinely curious about how teams are handling code reviews in Azure DevOps. We're currently facing some challenges in our process: we require at least two reviewers for every pull request (PR), and reviews can take anywhere from one to three days. We often find ourselves stuck in discussions over naming conventions and formatting while actual bugs make their way to production. I'm wondering if this is a common issue or if we're doing something wrong.

How do other teams manage their reviews? How many reviewers do you typically have, and how long do PRs usually sit before being approved? What does your review checklist look like? Are there any Azure DevOps extensions or tools you're using to streamline this?

We're considering implementing automated tools to tackle the repetitive tasks, so we can focus more on the important aspects like logic and architecture.
5 Answers
It sounds like automating formatting through linters and establishing naming conventions could help your team. If you don't have a solid testing strategy in place, that might explain why bugs are slipping through. Code reviews should ideally be quick, and using DORA metrics can help you measure your team's performance and efficiency.
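If you want hard numbers on review turnaround, the Azure DevOps REST API exposes pull request timestamps you can aggregate yourself. This is a minimal sketch, not a finished tool: the organization/project/repository names are placeholders, and it assumes a personal access token with Code (read) scope in an `AZDO_PAT` environment variable. It computes the average time from PR creation to completion over the last 100 merged PRs.

```python
import os
import base64
from datetime import datetime, timezone
import requests

# Hypothetical names; substitute your own organization, project, and repository.
ORG, PROJECT, REPO = "my-org", "my-project", "my-repo"
PAT = os.environ["AZDO_PAT"]  # personal access token with Code (read) scope

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories/{REPO}/pullrequests"
auth_header = base64.b64encode(f":{PAT}".encode()).decode()
resp = requests.get(
    url,
    params={"searchCriteria.status": "completed", "$top": 100, "api-version": "7.1"},
    headers={"Authorization": f"Basic {auth_header}"},
)
resp.raise_for_status()

def parse(ts: str) -> datetime:
    # API timestamps look like 2024-05-01T12:34:56.789Z; trim the fractional seconds.
    return datetime.strptime(ts[:19], "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)

durations = [
    (parse(pr["closedDate"]) - parse(pr["creationDate"])).total_seconds() / 3600
    for pr in resp.json()["value"]
    if "closedDate" in pr
]
if durations:
    print(f"Average PR open-to-merge time: {sum(durations) / len(durations):.1f} hours")
```

Tracking this alongside deployment frequency and change failure rate gives you the DORA picture, and it turns "reviews feel slow" into a number you can actually set a target against.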
I think you might be expecting too much from your reviewers. Automating formatting can ease the pressure, and naming conventions should be a one-time decision, not something relitigated in every PR. Your reviews can then become a final check for major mistakes or security risks before deploying.
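To take formatting off the reviewers' plate entirely, fail the build when files aren't formatted instead of debating it in comments. A minimal sketch, assuming a Python codebase with black installed; for other stacks the idea is the same, just swap in your formatter's check mode and wire the script into the PR validation build.

```python
import subprocess
import sys

# Run the formatter in check-only mode; it exits non-zero if any file would change.
result = subprocess.run(["black", "--check", "--diff", "."], capture_output=True, text=True)
if result.returncode != 0:
    print(result.stdout)
    print("Formatting check failed: run 'black .' locally and push again.")
    sys.exit(1)
print("Formatting check passed.")
```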
Absolutely! Tools like SonarCloud can help catch issues early, allowing reviewers to concentrate on more significant problems. High-quality reviews don’t have to take days if the scope is clearly defined.
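On the SonarCloud side, you can even block merges on the quality gate instead of waiting for someone to check the dashboard. A rough sketch against SonarCloud's web API, assuming a hypothetical project key and a token in `SONAR_TOKEN`; an Azure DevOps branch policy can then require the pipeline step running this to pass.

```python
import os
import sys
import requests

PROJECT_KEY = "my-org_my-repo"  # hypothetical; use your own SonarCloud project key
token = os.environ["SONAR_TOKEN"]

# Ask SonarCloud whether the project's quality gate is currently passing.
resp = requests.get(
    "https://sonarcloud.io/api/qualitygates/project_status",
    params={"projectKey": PROJECT_KEY},
    auth=(token, ""),  # token goes in the username field, password stays empty
)
resp.raise_for_status()
status = resp.json()["projectStatus"]["status"]
if status != "OK":
    print(f"Quality gate is {status}; fix the flagged issues before merging.")
    sys.exit(1)
print("Quality gate passed.")
```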
Just out of curiosity, how large is your team? Are there enough people available for the code reviews?
There are about 25 of us, so we should have enough hands on deck for reviewing.
Two reviewers is a decent number, but waiting several days for feedback is definitely not okay. That usually means reviews are getting pushed to the back burner instead of being treated as part of the workflow. When reviews take too long, reviewers lose context, and that's when the nitpicking starts while real issues get overlooked. Setting clear deadlines for reviews and limiting the scope of PRs can significantly speed things up.
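One concrete way to enforce a review deadline is a small scheduled job that lists active PRs and flags the ones that have waited too long without the required votes. A rough sketch against the Azure DevOps REST API, assuming the same placeholder org/project/repo names and `AZDO_PAT` token as the earlier example; where you send the reminder (Teams, email, a dashboard) is up to you.

```python
import os
import base64
from datetime import datetime, timezone
import requests

ORG, PROJECT, REPO = "my-org", "my-project", "my-repo"  # hypothetical placeholders
PAT = os.environ["AZDO_PAT"]
MAX_WAIT_HOURS = 24  # whatever review deadline your team agrees on

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories/{REPO}/pullrequests"
headers = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}
resp = requests.get(
    url,
    params={"searchCriteria.status": "active", "api-version": "7.1"},
    headers=headers,
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for pr in resp.json()["value"]:
    created = datetime.strptime(pr["creationDate"][:19], "%Y-%m-%dT%H:%M:%S").replace(
        tzinfo=timezone.utc
    )
    age_hours = (now - created).total_seconds() / 3600
    # A vote of 10 is "approved", 5 is "approved with suggestions".
    approvals = sum(1 for r in pr.get("reviewers", []) if r.get("vote", 0) >= 5)
    if age_hours > MAX_WAIT_HOURS and approvals < 2:
        print(f"PR {pr['pullRequestId']} '{pr['title']}' has waited "
              f"{age_hours:.0f}h with {approvals} approval(s)")
```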
Couldn't agree more! The key is for reviewers to focus on the actual logic and functionality rather than getting bogged down by minor details. Smaller PRs make it much easier to keep that focus.
It seems like the issue lies more in your team's dynamics than with the tools. If the process isn't solid, automating it won't fix the underlying problems. Get everyone on the same page regarding the purpose of code reviews; this can help eliminate those arguments about formatting and style. The focus should really be on whether the changes meet the required standards and don't introduce any security risks.
For sure! When everyone understands the real goal of a review, it shifts the focus away from personal preferences and helps streamline the process. Without setting those clear standards, it's hard to evaluate what's truly necessary in a review.

Totally agree! Automation should handle the consistency side of coding, while reviewers focus on intent and potential risks. It's also easy to overestimate what linters and tests catch: not everything shows up in them, and those gaps are where discussions tend to drag on.