I've been working on syncing engineering performance metrics (PR review times, cycle times, throughput) into a dashboard we host on Monday Dev. I'm curious how others approach this. For example, do you pull data from GitHub Actions, or do you use custom scripts? And how do you keep the data from becoming overwhelming while still surfacing real bottlenecks?
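For context, the custom-script route I've been experimenting with looks roughly like this. It's a minimal sketch that assumes you've already fetched merged-PR records from GitHub's `/repos/{owner}/{repo}/pulls` endpoint (the fetch itself is omitted; only the `created_at`/`merged_at` field names come from GitHub's API):

```python
from datetime import datetime
from statistics import median

def parse_ts(ts: str) -> datetime:
    """Parse GitHub's ISO-8601 timestamps (e.g. '2024-05-01T12:00:00Z')."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def median_review_hours(prs: list[dict]) -> float:
    """Median hours from PR creation to merge, over merged PRs only."""
    durations = [
        (parse_ts(pr["merged_at"]) - parse_ts(pr["created_at"])).total_seconds() / 3600
        for pr in prs
        if pr.get("merged_at")
    ]
    return median(durations)

# Sample records shaped like the GitHub pulls API response:
prs = [
    {"created_at": "2024-05-01T00:00:00Z", "merged_at": "2024-05-01T12:00:00Z"},
    {"created_at": "2024-05-02T00:00:00Z", "merged_at": "2024-05-03T00:00:00Z"},
    {"created_at": "2024-05-04T00:00:00Z", "merged_at": None},  # still open, skipped
]
print(median_review_hours(prs))  # → 18.0 (median of 12h and 24h)
```

From there I push the aggregated number into the Monday Dev board on a schedule, which keeps the dashboard to a handful of values rather than raw PR data.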
9 Answers
Honestly, we don't track everything meticulously because we have a capable management team. During my time as a manager, I found focusing on the team's vibe was often more effective than crunching numbers since metrics can be misleading.
We regularly assess DORA metrics, which feels real-time enough for us to gauge performance.
If you're using GitLab, they actually have some built-in metrics that could help with your tracking needs.
For detailed performance metrics, Grafana can integrate with GitHub to show PR review times and the duration of open pull requests. It's crucial to find a balance though; if builds take an eternity, that needs addressing. Regular standups and monitoring blockers are also great for keeping tabs on overall performance.
We rely on DORA metrics from our internal developer platform, which helps the team recognize bottlenecks without management interference. We pinpointed that PR reviews were notably slowing down our process, and the team addressed it organically. It's basically Goodhart's law: when a measure becomes a target, it stops being a good measure, so we keep the metrics informational rather than making them goals.
That makes sense! How did you manage to tackle the PR review issue?
Sounds like a great environment for problem-solving!
Consider integrations that enable metrics tracking only where you actually need it. Tracking everything drowns you in noise; focus on critical flows like deploy times and code quality, and use real-time dashboards that filter out the rest.
We've found success tracking internal SLOs and using burn-down charts to manage tasks for each sprint. It doesn't take much time and is manageable with our small team.
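To make the burn-down part concrete, the chart itself is just a running subtraction; here's a minimal sketch with made-up sprint numbers (the point totals are illustrative, not from any particular tool):

```python
def burn_down(total_points: int, completed_per_day: list[int]) -> list[int]:
    """Remaining story points at the end of each sprint day."""
    remaining = total_points
    series = []
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

# A 5-day sprint starting at 20 points:
print(burn_down(20, [3, 5, 0, 4, 6]))  # → [17, 12, 12, 8, 2]
```

Plotting that series against the straight line from 20 down to 0 is enough to show whether the sprint is on track, which is why it stays manageable for a small team.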
Do you really need those metrics in real time? Most teams I know review performance on a quarterly basis or during sprints rather than continuously.
Honestly, too many metrics can be counterproductive. Instead of tracking everything, I’d recommend feeding GitHub metrics into Grafana with Prometheus, focusing mainly on cycle time and deployment frequency.
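If it helps, the Prometheus side of this can stay tiny. Here's a rough sketch that computes deployment frequency from deploy timestamps and renders both metrics in Prometheus's text exposition format; a real setup would serve this from an HTTP endpoint for Prometheus to scrape, and the metric names (`team_cycle_time_hours`, `team_deploys_per_week`) are my own invention:

```python
from datetime import datetime, timedelta

def deploys_per_week(deploy_times: list[datetime], window_days: int = 28) -> float:
    """Deployment frequency over a trailing window, normalized to per-week."""
    cutoff = max(deploy_times) - timedelta(days=window_days)
    recent = [t for t in deploy_times if t >= cutoff]
    return len(recent) / (window_days / 7)

def exposition(cycle_time_hours: float, deploy_freq: float) -> str:
    """Render the two gauges in Prometheus text exposition format."""
    lines = [
        "# HELP team_cycle_time_hours Median PR open-to-merge time.",
        "# TYPE team_cycle_time_hours gauge",
        f"team_cycle_time_hours {cycle_time_hours}",
        "# HELP team_deploys_per_week Deployments per trailing week.",
        "# TYPE team_deploys_per_week gauge",
        f"team_deploys_per_week {deploy_freq}",
    ]
    return "\n".join(lines) + "\n"

# Six illustrative deploys in May:
deploys = [datetime(2024, 5, d) for d in (1, 3, 8, 15, 22, 28)]
freq = deploys_per_week(deploys)  # 6 deploys over 4 weeks = 1.5/week
print(exposition(18.0, freq))
```

Two gauges is genuinely enough for a first dashboard; everything else can wait until one of them looks wrong.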
That's a refreshing approach! I've noticed metrics can sometimes just add confusion.