Your team shipped 20 PRs last week. Sounds productive. But how long did each one take from first commit to running in production? If the answer is "I'm not sure," you're missing one of the most useful metrics in software engineering.
Cycle time in software engineering is the elapsed time between a developer's first commit on a piece of work and that code running in production. It's not about how fast people type. It's about how fast your system turns ideas into working software.

How Cycle Time Breaks Down
Cycle time isn't one monolithic number. It's the sum of several stages, each with its own bottlenecks:
- Coding time — first commit to PR open. How long a developer spends writing and self-reviewing before asking for feedback.
- Review wait time — PR open to first review. The idle time where code sits in a queue waiting for human attention.
- Review cycles — first review to approval. Back-and-forth on feedback, rework, and re-review.
- Merge to deploy — approval to production. CI pipelines, staging environments, deployment queues, and release trains.
The interesting part: for most teams, coding time is the smallest slice. The majority of cycle time is spent waiting — for reviews, for CI, for deploys. That's where the leverage is.
How to Measure It
You need timestamps at each stage boundary. Most teams pull this from their Git and CI data:
- First commit timestamp from Git
- PR opened timestamp from GitHub
- First review timestamp from GitHub
- PR merged timestamp from GitHub
- Deploy timestamp from your CI/CD system
Calculate the median across all PRs over a rolling window (two weeks works well). Median matters more than average here — a single PR that sat open over a holiday weekend shouldn't skew your picture.
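The calculation above can be sketched in a few lines of Python. The PR records here are hypothetical stand-ins for the Git, GitHub, and CI/CD timestamps listed above:

```python
# Sketch: per-stage median cycle time from PR timestamps.
# The records below are made up; real ones would come from your
# Git history, GitHub API, and CI/CD system.
from datetime import datetime
from statistics import median

prs = [
    {"first_commit": "2024-03-04T09:00", "pr_opened": "2024-03-04T15:00",
     "first_review": "2024-03-05T10:00", "merged": "2024-03-05T16:00",
     "deployed": "2024-03-05T17:30"},
    {"first_commit": "2024-03-06T11:00", "pr_opened": "2024-03-06T13:00",
     "first_review": "2024-03-06T14:00", "merged": "2024-03-07T09:00",
     "deployed": "2024-03-07T09:45"},
]

# Each stage is a (name, start_timestamp, end_timestamp) boundary pair.
STAGES = [
    ("coding", "first_commit", "pr_opened"),
    ("review_wait", "pr_opened", "first_review"),
    ("review_cycles", "first_review", "merged"),
    ("merge_to_deploy", "merged", "deployed"),
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

def stage_medians(prs):
    """Median duration in hours per stage, across all PRs in the window."""
    return {
        name: median(hours_between(pr[start], pr[end]) for pr in prs)
        for name, start, end in STAGES
    }

print(stage_medians(prs))
```

Run this over a rolling two-week window of merged PRs and the per-stage breakdown usually makes the biggest bottleneck obvious at a glance.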
What "Good" Looks Like
There's no universal benchmark, but DORA's research on lead time for changes (a close cousin of cycle time) gives us useful baselines:
- Elite teams: less than 1 day from commit to production
- High performers: 1 day to 1 week
- Medium performers: 1 week to 1 month
- Low performers: more than 1 month
If your median cycle time is under 48 hours, you're doing well. If it regularly exceeds a week, there's meaningful time being lost somewhere in the pipeline.
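If you want to track which bucket your team lands in over time, the tiers above reduce to a trivial function (hours as input):

```python
# Sketch: bucketing a median commit-to-production time into the
# DORA performance tiers listed above.
def dora_tier(median_hours: float) -> str:
    if median_hours < 24:           # under 1 day
        return "elite"
    if median_hours < 24 * 7:       # 1 day to 1 week
        return "high"
    if median_hours < 24 * 30:      # 1 week to ~1 month
        return "medium"
    return "low"                    # more than a month
```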

Where Teams Actually Lose Time
Let's walk through each stage and the patterns that slow teams down.
Review Wait Time
This is the single biggest source of wasted cycle time for most teams. A PR opens at 2pm, the reviewer is deep in their own work, and the review doesn't happen until the next morning. That's 18 hours of dead time on what might be a 200-line change.
Common causes:
- No clear expectations around review SLAs
- Reviewers aren't notified promptly (or notifications get buried)
- PRs are too large to review in one sitting, so reviewers procrastinate
- Uneven review load — one or two people review everything
Review Cycles
Even after the first review lands, PRs can bounce back and forth for days. Each round trip adds context-switching cost for both the author and reviewer.
Common causes:
- Vague or nitpicky feedback that triggers multiple rounds
- No team alignment on code standards (every review becomes a style debate)
- Authors don't respond to feedback promptly — same notification problem, different direction
CI Pipeline Duration
A 30-minute CI pipeline doesn't just cost 30 minutes. It means a developer pushes a fix, switches to something else, forgets about the PR, and comes back to it hours later. Slow CI multiplies wait time.
Common causes:
- Running the full test suite on every push instead of affected tests
- No caching for dependencies or build artifacts
- Flaky tests that require re-runs
- Sequential steps that could run in parallel
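One common fix for the parallelism problem is sharding: split the suite across N concurrent CI jobs. A minimal sketch, using a stable hash so each shard runs the same subset on every build (the test names are hypothetical; a real setup would list them via your test runner):

```python
# Sketch: splitting a test suite into N parallel CI shards.
import zlib

def shard_for(test_name: str, num_shards: int) -> int:
    # crc32 is stable across runs and machines, unlike Python's
    # built-in hash(), which is randomized per process.
    return zlib.crc32(test_name.encode()) % num_shards

def split_suite(tests, num_shards):
    shards = [[] for _ in range(num_shards)]
    for t in tests:
        shards[shard_for(t, num_shards)].append(t)
    return shards

tests = [f"test_module_{i}" for i in range(20)]
shards = split_suite(tests, 4)
```

Stable assignment matters: if shard membership shuffles on every run, per-shard caches and timing data become useless.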
Deployment Queues
Some teams have fast review cycles but only deploy once a day — or once a week. That last mile can silently add days to your cycle time.
Common causes:
- Manual deployment processes that require a specific person
- Release trains or batch deploys on a fixed schedule
- Staging environment contention (one team blocks another)
- Change approval boards that meet weekly
Practical Ways to Improve Each Stage
Cut Review Wait Time
- Set a review SLA. "All PRs get a first review within 4 business hours" is a reasonable starting point. Make it explicit.
- Automate review assignments. Use CODEOWNERS or round-robin assignment so PRs don't sit unowned.
- Send smart notifications. Don't rely on GitHub's default email notifications — they get buried. Surface pending reviews where your team already works.
- Keep PRs small. Small PRs tend to get reviewed several times faster than 500-plus-line ones. This is the single highest-leverage habit change.
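Enforcing an SLA starts with knowing which PRs have blown past it. A minimal sketch, on hypothetical PR data; in practice the records would come from GitHub's list-pull-requests and list-reviews endpoints, and a production version would count business hours rather than wall-clock hours:

```python
# Sketch: flagging open PRs that have waited past a review SLA.
from datetime import datetime, timedelta

SLA_HOURS = 4  # matches the "first review within 4 hours" example above

def overdue_prs(open_prs, now):
    """Return numbers of PRs with no first review after SLA_HOURS."""
    return [
        pr["number"] for pr in open_prs
        if pr["first_review"] is None
        and now - pr["opened"] > timedelta(hours=SLA_HOURS)
    ]

now = datetime(2024, 3, 7, 15, 0)
open_prs = [
    {"number": 101, "opened": datetime(2024, 3, 7, 9, 0),
     "first_review": None},                              # 6h, unreviewed
    {"number": 102, "opened": datetime(2024, 3, 7, 13, 0),
     "first_review": None},                              # 2h, within SLA
    {"number": 103, "opened": datetime(2024, 3, 7, 8, 0),
     "first_review": datetime(2024, 3, 7, 10, 0)},       # already reviewed
]
```

Pipe the output of something like this into your team's chat channel and the SLA enforces itself.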
Reduce Review Cycles
- Adopt a style guide and enforce it with linters. If a machine can catch it, a human shouldn't have to comment on it.
- Write clear PR descriptions. Explain the why, link the ticket, call out areas where you want focused feedback. Reviewers move faster when they have context.
- Distinguish blocking from non-blocking feedback. Use labels like "nit" or "optional" so authors know what actually needs to change before merge.
Speed Up CI
- Profile your pipeline. Find the slowest steps and optimize or parallelize them.
- Cache aggressively. Dependencies, Docker layers, build artifacts — anything that doesn't change between runs.
- Run only affected tests on PR pushes. Save the full suite for the merge to main.
- Fix or delete flaky tests. A test that fails randomly 5% of the time is worse than no test at all — it erodes trust in the entire suite.
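Finding the flaky tests is mostly a counting exercise over CI history. A sketch on hypothetical data (yours would come from CI run logs): a test that fails occasionally is flaky, while one that fails every time is simply broken, so we flag only a suspicious middle band of failure rates:

```python
# Sketch: spotting flaky tests from recent CI pass/fail history.
def flaky_tests(history, min_rate=0.02, max_rate=0.5):
    """history: {test_name: [True/False results]} -> sorted flaky names."""
    flagged = []
    for name, results in history.items():
        failure_rate = results.count(False) / len(results)
        if min_rate <= failure_rate <= max_rate:
            flagged.append(name)
    return sorted(flagged)

history = {
    "test_checkout": [True] * 19 + [False],  # fails 1 run in 20: flaky
    "test_login": [True] * 20,               # always passes: fine
    "test_billing": [False] * 20,            # always fails: broken, not flaky
}
```

The thresholds are judgment calls; the point is to turn "that test feels flaky" into a ranked list you can actually burn down.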
Unblock Deployments
- Automate deploys on merge. If your tests pass and the PR is approved, it should go to production without human intervention.
- Deploy smaller batches more frequently. Smaller deploys are safer and easier to roll back.
- Eliminate environment contention with ephemeral staging environments or feature flags.

Start With Visibility
You can't improve what you can't see. The first step is simply making cycle time visible — broken down by stage — so your team can have informed conversations about where time goes. Most teams are surprised by the results. They assume coding is the bottleneck when it's actually the gaps between steps that eat most of the calendar time.
Track it, talk about it in retros, and chip away at the biggest bottleneck first. Small improvements compound fast when you're removing days of idle time from every PR.
If you want automated cycle time visibility and smart nudges to keep PRs moving through your pipeline, Revvie plugs into GitHub and Slack to surface exactly where time is being lost — and helps your team get it back.