DORA metrics have become the gold standard for measuring software delivery performance. If you've read Accelerate or sat through any engineering leadership talk in the last five years, you've heard the pitch: track four metrics, benchmark against elite performers, and watch your org improve.
That advice works great if you're running 200 engineers across a dozen teams. But if you're a 5-15 person team, DORA metrics can actively mislead you — and the overhead of tracking them properly might not be worth it.
Here's what to measure instead.

A Quick Refresher on DORA Metrics
DORA (DevOps Research and Assessment) defines four key metrics for software delivery:
- Deployment frequency — how often you ship to production
- Lead time for changes — time from commit to production
- Change failure rate — percentage of deployments that cause failures
- Mean time to recovery (MTTR) — how fast you recover from failures
These metrics emerged from years of research across thousands of organizations. They correlate strongly with both organizational performance and developer satisfaction — at scale.
The key phrase there is at scale.
Where DORA Breaks Down for Small Teams
The overhead isn't justified
Tracking DORA metrics properly requires instrumentation. You need deployment event tracking, incident classification systems, and a way to tie commits to production releases. For a team of six, building and maintaining that infrastructure is a real cost — and it's time not spent shipping product.
Most small teams that "track DORA" are actually eyeballing numbers from their CI dashboard. That's not measurement; it's guessing.
The signals get noisy
On a small team, one person going on vacation skews everything. A single complex feature branch can tank your deployment frequency for a month. A bad deploy by an intern doubles your change failure rate overnight.
DORA metrics assume enough volume to produce meaningful trends. When you're deploying 3-8 times a week with five engineers, the sample size is too small for the numbers to tell you anything you don't already know from your daily standup.
They miss what matters most: collaboration
Here's the biggest gap. DORA metrics measure your pipeline — commit to production. They tell you nothing about what happens between "developer opens PR" and "code gets approved." For small teams, that's where most of the time goes.
A five-person team doesn't have a deployment problem. They have a "nobody reviewed my PR for two days" problem. They have a "this PR has been open for a week because the only person who knows this codebase is swamped" problem.
DORA doesn't see any of that.

What Small Teams Should Measure Instead
You don't need four carefully instrumented metrics. You need a handful of numbers you can check weekly that reflect how your team actually works together.
Review time
Track two things: time to first review and time from open to merge. These are the highest-signal metrics for a small team because they capture the collaboration bottleneck directly.
If your average time to first review is over four hours, PRs are sitting idle while context decays. That's your biggest lever for shipping faster — not deployment frequency.
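If you want the actual numbers rather than a gut feel, both are easy to pull from GitHub. Here's a minimal sketch using the GitHub REST API; OWNER/REPO is a placeholder, it expects a token in a GITHUB_TOKEN environment variable, and it ignores pagination and rate limits, so treat it as a starting point rather than a finished tool.

```python
# Minimal sketch: time to first review and open-to-merge time for recently
# merged PRs, via the GitHub REST API. OWNER/REPO, the GITHUB_TOKEN env var,
# and the 50-PR window are assumptions; pagination is ignored.
import os
from datetime import datetime
from statistics import median

import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def ts(stamp):
    # GitHub timestamps look like "2024-05-01T12:34:56Z"
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

prs = requests.get(
    f"{API}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 50},
    headers=HEADERS,
).json()

first_review_hours, open_to_merge_hours = [], []
for pr in prs:
    if not pr.get("merged_at"):
        continue  # closed without merging; skip
    opened, merged = ts(pr["created_at"]), ts(pr["merged_at"])
    open_to_merge_hours.append((merged - opened).total_seconds() / 3600)

    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews", headers=HEADERS).json()
    submitted = [ts(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if submitted:
        first_review_hours.append((min(submitted) - opened).total_seconds() / 3600)

if first_review_hours:
    print(f"median time to first review: {median(first_review_hours):.1f}h")
if open_to_merge_hours:
    print(f"median open to merge:        {median(open_to_merge_hours):.1f}h")
```

Medians beat means here; one pathological PR shouldn't dominate the weekly number.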
PR throughput
How many PRs does your team merge per week? This is a rough but useful proxy for shipping velocity that requires zero instrumentation. Just count them.
Unlike deployment frequency, PR throughput captures work at the unit where developers actually think about it. A single deploy might contain five PRs or one. Throughput tells you whether the team is moving.
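Counting really is all it takes. For illustration, here's a sketch that buckets recently merged PRs by ISO week, under the same assumptions as the previous snippet (placeholder OWNER/REPO, a GITHUB_TOKEN environment variable, no pagination).

```python
# Sketch: merged PRs per ISO week from the GitHub REST API.
# OWNER/REPO and GITHUB_TOKEN are placeholders; pagination is ignored.
import os
from collections import Counter
from datetime import datetime

import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

prs = requests.get(
    f"{API}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 100},
    headers=HEADERS,
).json()

per_week = Counter()
for pr in prs:
    if not pr.get("merged_at"):
        continue  # closed-but-unmerged PRs aren't throughput
    merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
    year, week, _ = merged.isocalendar()
    per_week[f"{year}-W{week:02d}"] += 1

for week, count in sorted(per_week.items()):
    print(week, count)
```

Don't read too much into any single week; on a small team the trend over a month is the signal.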
Review completion rate
What percentage of requested reviews actually get completed within 24 hours? This surfaces review bottlenecks before they become blockers.
If one engineer is requested on 60% of all reviews but completes only half of them on time, you've found your constraint. No DORA metric would surface that.
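Measuring this precisely takes a bit more plumbing, because you need the review-request events, not just the reviews. Here's a rough sketch that pairs review_requested events from GitHub's issues timeline API with submitted reviews; OWNER/REPO and GITHUB_TOKEN are placeholders, team requests, re-requests, and pagination are ignored, and "24 hours" is wall-clock time, not business hours.

```python
# Rough sketch: share of individual review requests answered within 24 hours.
# OWNER/REPO and GITHUB_TOKEN are placeholders; team requests, re-requests,
# and pagination are ignored, and the 24h window is wall-clock time.
import os
from datetime import datetime, timedelta

import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def ts(stamp):
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

prs = requests.get(
    f"{API}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 30},
    headers=HEADERS,
).json()

requested = completed = 0
for pr in prs:
    events = requests.get(f"{API}/issues/{pr['number']}/timeline",
                          params={"per_page": 100}, headers=HEADERS).json()
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews", headers=HEADERS).json()

    # earliest submitted review per reviewer login
    first_review = {}
    for r in reviews:
        if r.get("submitted_at") and r.get("user"):
            when = ts(r["submitted_at"])
            login = r["user"]["login"]
            first_review[login] = min(first_review.get(login, when), when)

    for ev in events:
        if ev.get("event") == "review_requested" and ev.get("requested_reviewer"):
            requested += 1
            asked = ts(ev["created_at"])
            done = first_review.get(ev["requested_reviewer"]["login"])
            if done and asked <= done <= asked + timedelta(hours=24):
                completed += 1

if requested:
    print(f"review completion rate (24h): {completed / requested:.0%}")
```

This is the fiddliest of these numbers to compute by hand, which is worth knowing before you decide to track it.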
Cycle time (simplified)
Track time from first commit on a branch to merge. Skip the "to production" part — for most small teams, merging to main and deploying are either the same thing or separated by minutes. Adding deployment tracking just to measure the last mile isn't worth it.
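If you want a number rather than a feel, the same PR data covers it: take the earliest commit on each merged PR and measure to the merge. A sketch, under the same assumptions as the earlier ones (placeholder OWNER/REPO, a GITHUB_TOKEN environment variable, no pagination):

```python
# Sketch: simplified cycle time (first commit on the branch to merge) for
# recently merged PRs. OWNER/REPO and GITHUB_TOKEN are placeholders;
# pagination is ignored.
import os
from datetime import datetime
from statistics import median

import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def ts(stamp):
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

prs = requests.get(
    f"{API}/pulls",
    params={"state": "closed", "sort": "updated", "direction": "desc", "per_page": 50},
    headers=HEADERS,
).json()

cycle_hours = []
for pr in prs:
    if not pr.get("merged_at"):
        continue
    commits = requests.get(f"{API}/pulls/{pr['number']}/commits",
                           params={"per_page": 100}, headers=HEADERS).json()
    if not commits:
        continue
    first_commit = min(ts(c["commit"]["author"]["date"]) for c in commits)
    cycle_hours.append((ts(pr["merged_at"]) - first_commit).total_seconds() / 3600)

if cycle_hours:
    print(f"median cycle time: {median(cycle_hours):.1f}h")
```

The median over a few weeks is plenty; you're looking for a trend, not precision.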
How to Start Simple
Don't build a metrics platform. Don't buy a DORA dashboard. Start here:
- Pick two metrics from the list above. Review time and PR throughput are the best starting pair.
- Check them weekly. Monday morning, spend five minutes looking at last week's numbers. That's it.
- Set one threshold, not a target. Example: "No PR should sit unreviewed for more than 8 business hours." React when you cross the threshold. Ignore normal variation.
- Talk about the numbers in retros. The value isn't the metric — it's the conversation it starts. "Why did three PRs take over two days to merge last week?" leads to real improvement.
- Automate the boring part. Use your GitHub data — it already tracks everything you need for review time, throughput, and cycle time. You just need something to surface it; a minimal sketch of one such check follows this list.
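To make that concrete, here's a sketch of the kind of check that covers the threshold and the automation in one go: it flags any open PR that has gone unreviewed past a cutoff. OWNER/REPO and the GITHUB_TOKEN environment variable are placeholders, it uses plain clock hours rather than business hours to keep the example short, and it ignores pagination. Run it from a cron job or a scheduled CI workflow and pipe the output somewhere your team will actually see it.

```python
# Sketch: flag open PRs that have sat with no review past a simple threshold.
# OWNER/REPO and GITHUB_TOKEN are placeholders; this uses wall-clock hours,
# not business hours, and ignores pagination.
import os
from datetime import datetime, timezone

import requests

API = "https://api.github.com/repos/OWNER/REPO"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
THRESHOLD_HOURS = 8  # the "no PR sits unreviewed for more than 8 hours" rule

def ts(stamp):
    return datetime.fromisoformat(stamp.replace("Z", "+00:00"))

now = datetime.now(timezone.utc)
open_prs = requests.get(
    f"{API}/pulls", params={"state": "open", "per_page": 100}, headers=HEADERS
).json()

for pr in open_prs:
    reviews = requests.get(f"{API}/pulls/{pr['number']}/reviews", headers=HEADERS).json()
    if reviews:
        continue  # someone has already looked at it
    age_hours = (now - ts(pr["created_at"])).total_seconds() / 3600
    if age_hours > THRESHOLD_HOURS:
        print(f"#{pr['number']} {pr['title']!r} unreviewed for {age_hours:.0f}h")
```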

The Point Isn't Anti-DORA
DORA metrics are legitimate and well-researched. If your team grows past 20-30 engineers, you should absolutely look at them. They solve real problems at that scale — aligning multiple teams, benchmarking across the org, and identifying systemic bottlenecks in delivery pipelines.
But for a small team, the highest-leverage thing you can measure is how well you collaborate on code — and that means focusing on the review process, not the deployment pipeline.
Start with review time. Watch it weekly. Fix what it reveals. It will do more for your shipping speed than any DORA dashboard.
Revvie tracks review time, PR throughput, and cycle time automatically from your GitHub data — and nudges your team in Slack when PRs need attention.