I'm interested in how others track engineering performance metrics in real time. My team uses a central dashboard on monday.dev to keep an eye on PR review times, cycle times, and overall throughput. I've heard some teams pull data directly from GitHub Actions or use custom scripts. How do you avoid drowning in numbers while still spotting actual bottlenecks?
9 Answers
GitLab has built-in tools for tracking these metrics, which can be really handy for teams looking to streamline their processes.
For tracking performance bottlenecks, Grafana integrates with GitHub to pull metrics like PR open times. Build and test times are usually something each team watches through its own CI reports. Honestly, if any build takes longer than 20 minutes, it's worth re-evaluating! Standups can also help surface whether someone is falling behind on their tasks.
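To make the PR-open-time part concrete, here's a minimal Python sketch of the calculation a Grafana pipeline would need once it has GitHub's timestamps. The sample data is made up and stands in for the `created_at`/`merged_at` fields the GitHub REST API returns for pull requests; nothing here is a real integration, just the duration math.

```python
from datetime import datetime

def pr_open_hours(created_at: str, closed_at: str) -> float:
    """Hours a PR stayed open, given ISO-8601 'Z'-suffixed timestamps
    in the shape the GitHub REST API uses (e.g. '2024-05-01T09:00:00Z')."""
    fmt = "%Y-%m-%dT%H:%M:%S%z"
    opened = datetime.strptime(created_at.replace("Z", "+0000"), fmt)
    closed = datetime.strptime(closed_at.replace("Z", "+0000"), fmt)
    return (closed - opened).total_seconds() / 3600

# Hypothetical sample standing in for a pull-request API response:
sample_prs = [
    {"created_at": "2024-05-01T09:00:00Z", "merged_at": "2024-05-01T15:00:00Z"},
    {"created_at": "2024-05-02T10:00:00Z", "merged_at": "2024-05-03T10:00:00Z"},
]
hours = [pr_open_hours(p["created_at"], p["merged_at"]) for p in sample_prs]
print(hours)  # [6.0, 24.0]
```

From there, pushing the resulting numbers into whatever store Grafana reads is the easy part.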
We monitor DORA metrics as part of our real-time strategy, and it works well for us!
We keep track of internal SLOs and use burndown charts for sprint tasks. Since our teams are small, gathering data for board presentations only takes me about an hour each month.
I had a great CTO who reminded us that when you make measurement the goal, it ceases to be a true measure. We pull DORA metrics through our internal DevOps platform, which really helps teams identify bottlenecks. For us, PR reviews were a big hurdle, so we facilitated a team discussion to resolve that organically. How do you all tackle PR review challenges?
What made you choose that internal platform over others like Cortex?
I agree, too many dashboards can be counterproductive. Keep it simple: route GitHub metrics into Grafana via Prometheus to track cycle times and deployment frequency, and don't get distracted by vanity metrics.
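As a rough illustration of "keep it simple": once cycle times land in one place, two aggregates, the median and the p90, usually tell the story better than every raw data point. A stdlib-only sketch with hypothetical numbers (hours from first commit to deploy):

```python
import statistics

def cycle_time_summary(hours: list[float]) -> dict:
    """Median and p90 cycle time: the two numbers worth graphing."""
    cuts = statistics.quantiles(hours, n=10, method="inclusive")
    return {"median": statistics.median(hours), "p90": cuts[8]}

# Made-up cycle times for one team's merged PRs over a sprint:
times = [4, 6, 7, 8, 9, 12, 14, 18, 30, 72]
print(cycle_time_summary(times))
```

A long tail (the 72 above) barely moves the median but shows up in the p90, which is exactly the kind of bottleneck signal worth putting on a dashboard.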
Honestly, we don't rely on heavy metrics; we have competent management! I've found it more effective to gauge how things are going by the team's vibe rather than getting bogged down in noisy metrics. Staying in touch with developers gives much clearer insight.
I totally relate! I also struggled with metrics management; they always felt too random to reflect reality.
I suggest using integrations that trigger only when important metrics change—we use GitHub Actions and custom scripts but avoid clutter. Focus on key metrics like deploy times and incident response rather than tracking every little detail. Consider using a filtered real-time dashboard to highlight significant issues.
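A minimal sketch of that "trigger only when important metrics change" idea: compare a current snapshot against a baseline and surface only the metrics that regressed past a threshold. The metric names and the 20% threshold are assumptions for illustration, not anyone's actual setup.

```python
def regressions(current: dict, baseline: dict, pct: float = 0.2) -> dict:
    """Return only metrics that are more than `pct` worse than baseline
    (higher = worse here); everything else stays off the dashboard."""
    return {
        name: value
        for name, value in current.items()
        if name in baseline and value > baseline[name] * (1 + pct)
    }

# Hypothetical weekly snapshots (metric names are made up):
baseline = {"deploy_time_min": 10, "pr_review_hours": 8, "incidents": 2}
current = {"deploy_time_min": 11, "pr_review_hours": 14, "incidents": 2}
print(regressions(current, baseline))  # {'pr_review_hours': 14}
```

A custom script like this running on a schedule (or in CI) can post only when the dict is non-empty, which keeps the noise down.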
But do you really need metrics in real-time? Most teams review metrics quarterly or monthly, not necessarily on a continuous basis.

That's interesting! Have you found any strategies that work particularly well to resolve PR review delays?