You open your engineering metrics dashboard on Monday morning. Deployment frequency looks solid — you shipped 14 times last week. Change failure rate is low. MTTR (mean time to recovery) is under an hour. Everything looks green.
But your team feels slow. Features that should take two days take a week. Developers are frustrated. Sprint commitments keep slipping. The dashboard says you're fine. Your team says otherwise.
The metric you're probably not tracking — or not tracking well enough — is PR review time. And it might be the single most predictive indicator of how fast and how well your team actually ships.

What PR Review Time Actually Measures
PR review time breaks down into two distinct measurements:
- Time to first review: how long from when a developer opens a PR to when another human looks at it
- Time to merge: how long from PR open to the code landing in your main branch
Both matter, but time to first review is the leading indicator. It tells you how long work sits idle before anyone engages with it. Time to merge captures the full review cycle — first review, feedback rounds, approval, and merge.
Together, they paint a picture of your team's review culture and collaboration health that no other metric captures as directly.
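In code, both measurements are just timestamp deltas. Here's a minimal sketch, assuming ISO 8601 timestamps (the example PR and its times are invented for illustration):

```python
from datetime import datetime

def hours(start: str, end: str) -> float:
    # Timestamps in ISO 8601, e.g. "2024-05-06T10:00:00+00:00"
    return (datetime.fromisoformat(end)
            - datetime.fromisoformat(start)).total_seconds() / 3600

opened   = "2024-05-06T10:00:00+00:00"  # PR opened Monday 10:00
reviewed = "2024-05-06T13:30:00+00:00"  # first human review at 13:30
merged   = "2024-05-07T09:00:00+00:00"  # merged Tuesday 09:00

time_to_first_review = hours(opened, reviewed)  # 3.5 hours
time_to_merge        = hours(opened, merged)    # 23.0 hours
```

Note that the two numbers can diverge sharply: a PR reviewed within the hour can still take days to merge if feedback rounds drag on.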
The Review Time Tax
Here's the thing most teams underestimate: every hour a PR sits waiting costs more than an hour of engineering time.
When a developer opens a PR and nobody reviews it for six hours, here's what actually happens:
- The author context-switches to a new task. They load an entirely different problem into working memory.
- When the review finally comes back with comments, they have to drop their current work, re-load the context of the original PR, and address feedback.
- If the PR has been open long enough, the base branch has moved. Now there are merge conflicts to resolve before the reviewer can even look at the updated code.
- The reviewer has to re-review, which takes longer because the diff has changed. Sometimes this triggers another round of feedback.
A PR that could have been a 30-minute review-and-merge cycle on the same morning it was opened turns into a multi-day affair spread across three or four context switches. Multiply that by every PR your team opens, and you start to see where the weeks go.
Context switching is the silent killer. Research from Microsoft suggests that developers need 10-15 minutes to regain deep focus after an interruption. A PR sitting in review doesn't just block that PR — it fragments the author's productivity on everything else they touch while waiting.

What Good Looks Like
Benchmarks vary by team size and codebase complexity, but here are reasonable targets for most product engineering teams:
- Time to first review: under 4 hours during business hours. This means if a PR is opened at 10am, someone has looked at it by 2pm. For many high-performing teams, this is under 2 hours.
- Time to merge: under 24 hours for standard PRs. Larger architectural changes might take longer, but the median should land well within a business day.
- Review rounds: 1-2 on average. If you're consistently hitting 3+ rounds of review, the problem is likely upstream — unclear requirements, PRs that are too large, or misalignment on approach before code was written.
If your time to first review is measured in days rather than hours, you have a review culture problem. That's not a judgment — it's an opportunity. Review time responds quickly to attention and process changes.
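One way to make these targets actionable is a small check against your team's medians. The thresholds below come from the benchmarks in this section; treat them as a starting point, not gospel:

```python
def check_benchmarks(median_first_review_h: float,
                     median_merge_h: float,
                     avg_review_rounds: float) -> list[str]:
    """Flag review-time medians that miss the targets discussed above."""
    flags = []
    if median_first_review_h > 4:   # target: under 4 business hours
        flags.append("time to first review exceeds 4 hours")
    if median_merge_h > 24:         # target: under 24 hours for standard PRs
        flags.append("time to merge exceeds 24 hours")
    if avg_review_rounds > 2:       # 3+ rounds usually means an upstream problem
        flags.append("3+ review rounds: check PR size and requirements")
    return flags

# A team with an 11-hour first review and a 30-hour merge time, but tight feedback loops:
check_benchmarks(11, 30, 1.5)
```

Run weekly against your measured medians, this turns the benchmarks into a checklist rather than a number you glance at and forget.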
How to Start Tracking It
You don't need expensive tooling to start. Here's a practical path:
Week 1: Get the raw data. Pull PR timestamps from the GitHub API. You need `created_at`, the timestamp of the first review event, and `merged_at`. A simple script can calculate the deltas and dump them into a spreadsheet.
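A sketch of that script, using only the standard library. The two endpoints are real GitHub REST API routes (list closed PRs, then list each PR's reviews); `OWNER/REPO` and the token are placeholders you'd fill in:

```python
import json
from datetime import datetime
from urllib.request import Request, urlopen

API = "https://api.github.com"

def get(path: str, token: str):
    # Minimal GitHub REST call (no pagination handling, for brevity).
    req = Request(API + path, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    })
    with urlopen(req) as resp:
        return json.load(resp)

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # GitHub's timestamp format
    return (datetime.strptime(end, fmt)
            - datetime.strptime(start, fmt)).total_seconds() / 3600

def pr_deltas(pr: dict, reviews: list) -> dict:
    # The reviews endpoint returns reviews in chronological order,
    # so the first entry is the first review event.
    first = reviews[0]["submitted_at"] if reviews else None
    return {
        "number": pr["number"],
        "time_to_first_review_h": (
            hours_between(pr["created_at"], first) if first else None),
        "time_to_merge_h": (
            hours_between(pr["created_at"], pr["merged_at"])
            if pr["merged_at"] else None),
    }

# Usage (hits the network; OWNER/REPO and token are placeholders):
#   prs = get("/repos/OWNER/REPO/pulls?state=closed&per_page=100", token)
#   rows = [pr_deltas(pr, get(f"/repos/OWNER/REPO/pulls/{pr['number']}/reviews", token))
#           for pr in prs]
```

From there, writing `rows` out with the `csv` module gets you the spreadsheet.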
Week 2: Establish your baseline. Look at the last 30 days. What's your median time to first review? Median time to merge? Don't look at averages — a single PR that sat open for two weeks will skew everything. Medians give you the true picture of what a typical PR experiences.
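To see the skew effect concretely, here's a toy sample (hours to first review, invented for illustration) with one two-week outlier:

```python
from statistics import mean, median

# Nine typical PRs plus one that sat open for two weeks (336 hours).
hours_to_first_review = [2, 3, 3, 4, 5, 5, 6, 8, 11, 336]

print(f"mean:   {mean(hours_to_first_review):.1f} h")    # 38.3: looks like a day and a half
print(f"median: {median(hours_to_first_review):.1f} h")  # 5.0: what a typical PR actually sees
```

One stale PR makes the average nearly eight times the median. The median is the number that matches a developer's day-to-day experience.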
Week 3: Share with the team. Put the numbers in front of your team without blame. "Our median time to first review is 11 hours. That means most PRs sit for more than half a business day before anyone looks at them. What do we think about that?" Let the team own the conversation.
Ongoing: Review weekly. Add PR review time to your weekly team sync or retro. Track the trend over time. Celebrate improvements. Investigate spikes.
The key is making the metric visible. Most teams don't have slow reviews because they don't care — they have slow reviews because nobody is watching.

The Compounding Effect
When review time improves, everything downstream improves:
- Cycle time drops because code spends less time in limbo
- PR size shrinks because developers learn that smaller PRs get reviewed faster
- Merge conflicts decrease because code lands before the base branch drifts
- Developer satisfaction improves because there's less frustration and more flow
- Knowledge sharing increases because reviews happen while the code is fresh for everyone
It's one of the rare metrics where improving the number directly improves the team's lived experience. Nobody likes waiting. Nobody likes context switching. Faster reviews make everyone's day better.
Start Watching the Number
PR review time won't tell you everything about your engineering organization. But it will tell you something that deployment frequency, velocity, and most other popular metrics won't: how well your team collaborates in real time.
If you want to skip the API scripting and dashboard building, tools like Revvie can surface review time data directly in Slack — making it visible to the team without adding another dashboard to check. But however you get there, the important thing is to start paying attention to the number. It's probably telling you something you need to hear.