Project Management Metrics That Actually Matter

A practical guide to measuring what drives project success — and ignoring what doesn't.


1. Why Metrics Matter

Every project tells a story. The question is whether you're reading it or guessing.

Most project managers rely on gut feeling, status meetings, and the occasional Gantt chart to understand how their project is going. That works — until it doesn't. The project that "felt fine" turns out to be three weeks behind. The team that "seemed productive" was actually context-switching between too many items.

Metrics don't replace judgment. They inform it. A good metric tells you something you didn't already know, points to a possible cause, and suggests what to try next. A bad metric gives you a number that feels important but doesn't lead to action.

This guide covers the metrics that lead to action. For each one, we explain:

  • What it is — the concept, in plain language
  • How to calculate it — the actual formula
  • What good looks like — benchmarks from published research
  • What it tells you — the insight behind the number
  • What to do about it — practical suggestions when the number is off

We also cover what NOT to measure — because some popular metrics actively mislead.

2. The Metrics Framework

Before diving into individual metrics, it helps to understand how work flows through a project.

Most project management follows a pattern, whether you call it agile, waterfall, or "the way we do things here." Work items — tasks, stories, tickets, whatever your team calls them — move through a series of stages:

Created → Refined → Planned → In Progress → (Waiting) → Done → Closed

Not every item goes through every stage. Some skip straight from "created" to "in progress." Some bounce back from "in progress" to "waiting" multiple times. Some get cancelled halfway through. That's normal — and it's exactly what metrics help you see.

  Category   What it measures                             When it's useful
  Flow       How smoothly work moves through the system   Always — these are universal
  Sprint     How well time-boxed planning works           When you use sprints or iterations
  Backlog    The health of your upcoming work             Always — every project has a backlog
  Aging      How long items have been stuck               Always — reveals hidden problems

3. Flow Metrics

Flow metrics measure how work moves through your project. They don't care whether you use sprints, Kanban, or pen and paper. They work for any team, any methodology, any project size.

Cycle Time

What it is

Cycle time is the elapsed time from when someone starts working on an item to when it's done. It includes all the time the item spends "in the system" — active work, waiting for review, blocked by a dependency, sitting in a queue.

Think of it like a restaurant: cycle time is the duration from when the kitchen starts preparing your meal to when it arrives at your table. If the chef works for 10 minutes but the plate sits under the heat lamp for 20 minutes waiting for a server, the cycle time is 30 minutes.

How to calculate it

Cycle Time = Date item marked "Done" − Date item entered "In Progress"

For a meaningful benchmark, use the 85th percentile across all completed items — not the average. The average is skewed by outliers. The 85th percentile tells you: "85% of our work items finish within X days." That's a reliable commitment you can make.
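The percentile is simple to compute yourself. A minimal sketch in Python, assuming items arrive as (started, done) date pairs and using the nearest-rank method (both assumptions; your tracker's export will differ):

```python
import math
from datetime import date

def cycle_time_p85(items):
    """85th-percentile cycle time in days, using the nearest-rank method.

    `items` is a list of (in_progress_date, done_date) pairs. The result
    reads as: "85% of our items finish within this many days."
    """
    times = sorted((done - start).days for start, done in items)
    if not times:
        raise ValueError("no completed items")
    rank = math.ceil(0.85 * len(times))  # smallest value covering 85% of items
    return times[rank - 1]

# Ten completed items with cycle times of 2, 3, 1, 6, 2, 3, 2, 9, 2, 3 days
items = [
    (date(2024, 3, day), date(2024, 3, day + t))
    for day, t in [(1, 2), (2, 3), (3, 1), (4, 6), (5, 2),
                   (6, 3), (7, 2), (8, 9), (9, 2), (10, 3)]
]
print(cycle_time_p85(items))  # 6
```

Note the contrast with the average: the mean of this sample is 3.3 days, which the two slow items barely move, but the 85th percentile (6 days) is the number you can actually commit to.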

Benchmarks

  Rating     Cycle Time (85th pct)   Source
  Good       3 days or less          Kanban University, Daniel Vacanti
  Warning    4–7 days
  Critical   More than 7 days

These benchmarks assume work items are properly sized. If your items regularly take more than 5 days, the likely culprit is items that are too large, not a team that is too slow.

What it tells you

Short cycle time means your team delivers value frequently. Each completed item is a learning opportunity — the team sees the result, the customer gives feedback, and the next item benefits.

Long cycle time means items pile up in the system. Each one carries mental overhead, and feedback comes too late to be useful.

What to do when cycle time is too long

  • Check WIP. Too many items in progress simultaneously is the #1 cause of long cycle times.
  • Split items smaller. If an item takes 7 days, it probably contains 2–3 smaller deliverables.
  • Look for waiting time. If an item is "in progress" for 5 days but only actively worked on for 1 day, the problem isn't speed — it's blocking and waiting.

Lead Time

What it is

Lead time is broader than cycle time. It measures the elapsed time from when work becomes "ready" — refined, understood, and ready to be picked up — to when it's done.

The distinction matters: cycle time starts when work begins; lead time starts when work could begin. The gap between the two is queue time — how long refined items sit waiting before someone starts them.

Lead Time = Date item marked "Done" − Date item marked "Ready"

Why start at "Ready" and not at "Created"? Because the time between creation and refinement is a different process entirely. An item might sit in "created" status for weeks simply because it's a future idea, not because anything is wrong. But once an item is "ready," the clock is ticking — someone invested effort to refine it, and the team expects it to be done soon.

What it tells you

If lead time is much longer than cycle time, your team has a queuing problem: items are refined and ready, but nobody picks them up. If lead time is close to cycle time, your process flows well.
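The three numbers are easy to compute side by side. A sketch with illustrative dates:

```python
from datetime import date

def lead_cycle_queue(ready, started, done):
    """Split one item's history into lead time, cycle time, and queue time (days)."""
    lead = (done - ready).days      # ready -> done
    cycle = (done - started).days   # in progress -> done
    queue = lead - cycle            # refined, but waiting to be picked up
    return lead, cycle, queue

lead, cycle, queue = lead_cycle_queue(
    ready=date(2024, 5, 1), started=date(2024, 5, 8), done=date(2024, 5, 12)
)
print(lead, cycle, queue)  # 11 4 7: a 7-day queue means work sits ready but unpicked
```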

Flow Efficiency

What it is

Flow efficiency is the ratio of active work time to total cycle time. It answers: "Of all the time an item spent in our system, how much of it was actual work?"

Flow Efficiency = Active Work Time / Cycle Time × 100%

For example, if an item has a cycle time of 5 days, but the team actively worked on it for 2 days (the other 3 days it was waiting for review, blocked, or in a queue), the flow efficiency is 40%.
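If your tool records how long an item sat in each status, the calculation is straightforward. A sketch, assuming the history comes as (status, days) segments and treating only "In Progress" as active work; which statuses count as active is a team decision:

```python
def flow_efficiency(history):
    """Flow efficiency for one item, from its status history.

    `history` lists (status, days) segments between entering "In Progress"
    and reaching "Done". Only "In Progress" counts as active work here,
    which is an assumption: each team decides which statuses are active.
    """
    work = sum(days for status, days in history if status == "In Progress")
    total = sum(days for _, days in history)
    return round(100 * work / total, 1)

history = [("In Progress", 1), ("Waiting", 2), ("In Progress", 1), ("Blocked", 1)]
print(flow_efficiency(history))  # 40.0: 2 active days out of 5 total
```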

Benchmarks

  Rating      Flow Efficiency   Source
  Typical     10–15%            Modig & Ahlstrom, "This is Lean"
  Good        15–25%
  Very Good   25–40%
  Excellent   Above 40%

Yes, you read that right. Most organisations have a flow efficiency of only 10–15%. That means 85–90% of the time, work items are waiting, not being worked on. This is the single biggest opportunity for improvement in most projects.

What to do when flow efficiency is low

  • Reduce handoffs. Every time an item moves from one person or team to another, it enters a queue.
  • Make blockers visible. Track why items sit in "waiting" — is it always the same dependency?
  • Reduce WIP. Fewer items in progress means less queuing and faster flow.

Work in Progress (WIP)

What it is

Work in Progress (WIP) is the number of items currently being worked on simultaneously. It's the simplest metric to measure and one of the most powerful to manage.

WIP = Count of items currently in "In Progress" status

Benchmarks

  Rating     WIP (per person)   Source
  Good       1–2 items          Weinberg, "Quality Software Management"
  Warning    3–4 items
  Critical   5 or more

Gerald Weinberg's research on context-switching is sobering:

  Simultaneous items   Productive time per item   Time lost
  1                    100%                       0%
  2                    40% each                   20%
  3                    20% each                   40%
  5                    5% each                    75%

With 5 items in progress, you lose 75% of your productive time just switching between them.
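Weinberg's figures can be turned into a quick calculator. A sketch; the 4-item entry is an addition from the commonly cited version of his table, not shown above:

```python
# Share of productive time left per item when one person juggles n items
# at once, per Weinberg. The 4-item row (10% each) comes from the commonly
# cited version of his table and is not in the table above.
WEINBERG_SHARE = {1: 1.00, 2: 0.40, 3: 0.20, 4: 0.10, 5: 0.05}

def time_lost_to_switching(n_items):
    """Fraction of total productive time lost to context switching."""
    return 1 - WEINBERG_SHARE[n_items] * n_items

for n in (1, 2, 3, 5):
    print(n, f"{time_lost_to_switching(n):.0%}")  # prints 0%, 20%, 40%, 75%
```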

What to do when WIP is too high

  • Set a WIP limit. Agree on a maximum number of items in progress. When the limit is reached, finish something before starting something new.
  • Say "not yet" more often. Stakeholders will always want more things started. Your job is to finish things, not start them.
  • Finish before starting. This sounds obvious. It is not practiced nearly enough.

Waiting Time

What it is

Waiting time is the total time items spend in a "waiting" or "blocked" status during their lifecycle.

Waiting Time % = Time in "Waiting" status / Cycle Time × 100%

Benchmarks

  Rating      Waiting Time %   Source
  Excellent   Less than 10%    Reinertsen, "Principles of Product Dev Flow"
  Good        10–15%
  Warning     15–30%
  Critical    More than 30%

What to do when waiting time is too high

  • Track the reason. Not just "waiting" — waiting for what? Categorise and address the most frequent cause.
  • Reduce external dependencies. Can you restructure to reduce handoffs between teams?
  • Empower decision-making. If items wait for management approval, can the team decide more independently?

4. Sprint Metrics

Sprint metrics only apply if your team works in sprints — fixed time-boxes (typically 1–4 weeks) where you plan a set of work and aim to complete it. If you don't use sprints, skip to Backlog Metrics.

Sprint Completion Rate

Sprint Completion Rate = Items completed / Items planned × 100%

  Rating       Completion Rate          Source
  Good         80–90%                   Scrum Guide; Mike Cohn
  Acceptable   70–79%
  Warning      60–69%
  Critical     Below 60%
  Also flag    Above 95% consistently   May indicate under-commitment

Note the last row: a team that always completes 100% is probably not planning enough work. They're playing it safe instead of pushing towards a meaningful goal.
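The bands above translate into a simple rating function. A sketch for a single sprint; see the docstring for the assumptions it makes:

```python
def sprint_completion(completed, planned):
    """Completion rate plus a rating from the benchmark bands.

    The 90-95% gap is unlabelled in the benchmark table; this sketch folds
    it into "good". Checking "consistently above 95%" needs several sprints,
    so the flag here fires on any single sprint at 95% or more.
    """
    rate = 100 * completed / planned
    if rate >= 95:
        note = "check for under-commitment"
    elif rate >= 80:
        note = "good"
    elif rate >= 70:
        note = "acceptable"
    elif rate >= 60:
        note = "warning"
    else:
        note = "critical"
    return round(rate), note

print(sprint_completion(7, 10))   # (70, 'acceptable')
print(sprint_completion(10, 10))  # (100, 'check for under-commitment')
```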

What to do when completion rate is too low

  • Plan less. If you consistently finish 60%, plan 60% of what you used to. Then improve from there.
  • Track disruptions. How much unplanned work came in? From whom? Why?
  • Protect the sprint. Once the sprint starts, new work should be the exception, not the rule.

Velocity and Throughput

Throughput is the number of items completed per unit of time — usually "items per week." We deliberately avoid using story points as a velocity measure (see What NOT to Measure). Instead, we count completed items.

Throughput = Number of items completed per week

There's no universal "good" throughput — it depends on team size and item complexity. Instead, track variance: how stable is throughput over time?

  Rating     Variance (sprint-to-sprint)   Source
  Good       Within ±15%                   Scrum.org
  Warning    ±25%
  Critical   More than ±40%

Stable throughput is more valuable than high throughput. A team that consistently delivers 8 items per sprint is more predictable than one that swings between 3 and 15.
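"Variance" can be operationalised in several ways. One reasonable reading, sketched here, is the largest percentage deviation from the mean throughput:

```python
def throughput_variance(per_sprint):
    """Largest percentage deviation from mean throughput across sprints."""
    mean = sum(per_sprint) / len(per_sprint)
    return max(abs(t - mean) / mean for t in per_sprint) * 100

stable = [8, 7, 9, 8, 8]     # items completed per sprint (illustrative)
erratic = [3, 15, 6, 12, 4]
print(round(throughput_variance(stable)))   # 12, inside the +/-15% band
print(round(throughput_variance(erratic)))  # 88, far past critical
```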

Scope Change

Scope Change % = Items added after sprint start / Items planned at sprint start × 100%

  Rating     Scope Change      Source
  Good       10% or less       PMI, "Pulse of the Profession"
  Warning    10–20%
  Critical   More than 20%

If more than 20% of work in every sprint is unplanned, you don't really have sprints — you have a list of intentions that gets overridden by reality.

What to do when scope change is too high

  • Track where it comes from. Always the same stakeholder? The same type of work?
  • Create a buffer. Plan for 80% capacity and keep 20% for unplanned work.
  • Negotiate, don't absorb. When new work comes in mid-sprint, something else must come out.

5. Backlog Metrics

Your backlog is your inventory of future work. Like physical inventory, it has a carrying cost: items need maintenance, and old items silently expire without anyone noticing.

Backlog Size

  Rating     Backlog Size          Source
  Good       Fewer than 60 items   Roman Pichler; Scrum Guide
  Warning    60–100 items
  Critical   More than 100 items

A large backlog isn't a sign of ambition — it's a sign of indecision. Nobody can prioritise 200 items effectively. The bottom 100 will never get done.

What to do when backlog is too large

  • Archive ruthlessly. Anything untouched for 90+ days goes to an archive.
  • Set a limit. "Our backlog has a maximum of 50 items" forces real prioritisation.
  • Refine less, decide more. Don't refine 100 items. Decide which 20 matter now.

Stale Items

Stale Items % = Items not updated in 30+ days / Total backlog items × 100%

  Rating     Stale Items     Source
  Good       Less than 10%   Atlassian Agile Coach
  Warning    10–25%
  Critical   More than 25%

Stale items are zombie items — they're technically alive but nobody is paying attention to them. They clutter the backlog and make prioritisation harder.
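The calculation only needs each item's last-updated date. A sketch with made-up data:

```python
from datetime import date, timedelta

def stale_percentage(last_updated, today, threshold_days=30):
    """Percentage of backlog items untouched for `threshold_days` or more."""
    stale = sum(1 for d in last_updated if (today - d).days >= threshold_days)
    return round(100 * stale / len(last_updated))

today = date(2024, 6, 30)
# Days since each of ten backlog items was last touched (illustrative)
updates = [today - timedelta(days=d) for d in (2, 5, 10, 45, 90, 12, 3, 31, 8, 15)]
print(stale_percentage(updates, today))  # 30: critical, 3 of 10 items are zombies
```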

Cancellation Rate

Cancellation Rate = Cancelled items / Total items created × 100%

Measured over the lifetime of the project, from start to now.

  Rating       Cancellation Rate   Source
  Good         10% or less         Standish Group, CHAOS Reports
  Acceptable   11–15%
  Warning      16–25%
  Critical     More than 25%

Some cancellation is healthy — it means you're learning and adjusting. But above 20% suggests items are created without sufficient thought or priorities change too frequently.

6. Aging Metrics

Aging metrics flag items that have been in an active state for too long. They're your early warning system for work that's stuck, forgotten, or silently blocked.

Story Aging

  Rating     Story Age (in active status)   Source
  Good       5 working days or less         Kanban University
  Warning    6–10 working days
  Critical   More than 10 working days

An item that's been "in progress" for 2 weeks is almost certainly stuck.

What to do about aging items

  • Make them visible. Daily check: "Do we have anything in progress for more than 5 days?"
  • Ask "what's blocking this?" — not "when will this be done?"
  • Consider splitting. Can you ship part of it now?
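The daily check is easy to script if your tool exports when each item entered an active status. A sketch that counts working days (Monday to Friday, ignoring holidays, which is a simplification; the item keys are hypothetical):

```python
from datetime import date, timedelta

def working_days(start, end):
    """Count Mon-Fri days from `start` up to, but not including, `end`."""
    return sum(
        1
        for i in range((end - start).days)
        if (start + timedelta(days=i)).weekday() < 5
    )

def aging_alerts(items, today, limit=5):
    """Items sitting in an active status for more than `limit` working days."""
    return [key for key, entered in items if working_days(entered, today) > limit]

# Hypothetical item keys with the date each entered "In Progress"
items = [("PROJ-101", date(2024, 6, 17)), ("PROJ-102", date(2024, 6, 26))]
print(aging_alerts(items, date(2024, 6, 28)))  # ['PROJ-101'], stuck for 9 working days
```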

Epic and Initiative Aging

  Rating     Epic Age             Initiative Age
  Good       6 weeks or less      3 months or less
  Warning    6–12 weeks           3–6 months
  Critical   More than 12 weeks   More than 6 months

An epic lasting 12+ weeks without progress usually indicates poor scoping, blocked dependencies, or shifting priorities.

7. Status Frequency

This metric is different from the others — it measures how your team uses the workflow, not how fast they deliver.

Status frequency counts how many times each status has been used across all work items throughout the project's lifetime. Visualised as a bar chart — one bar per status, height equals frequency — the shape tells you a lot about your project's health.

Four key insights

  1. Team transparency. If "In Progress" is used 200 times but "Waiting" only 5 times, it doesn't mean the team never waits. It means they don't report waiting. The current status snapshot might not reflect reality.
  2. Skipped steps. If "Ready" is rarely used, stories skip refinement — going straight from "Created" to "In Progress." This often indicates work pressure or unclear processes. It's a quality risk.
  3. Bottlenecks. If "Waiting" has a high frequency, items are frequently blocked. Even if most items are currently flowing, the pattern shows that blocking is a recurring problem.
  4. Healthy heartbeat. A project with evenly distributed status usage — items flowing through all stages, with "Done" and "Closed" frequencies approaching total item count — has a healthy heartbeat. A project where most items are stuck in "Open" or "In Progress" has a circulation problem.

This is not a snapshot. Status frequency is fundamentally different from looking at current status counts. Your board might show 5 items "In Progress" and look perfectly healthy. But if the frequency chart shows "Waiting" has been used 80 times, your project has a chronic blocking problem that happens to not be visible right now.
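Counting lifetime usage takes a few lines once you can export each item's status history. A sketch with illustrative histories:

```python
from collections import Counter

def status_frequency(histories):
    """Lifetime status usage across all work items.

    `histories` holds one status sequence per item. Repeated entries all
    count: an item that bounced into "Waiting" twice adds 2 to that bar.
    """
    freq = Counter()
    for sequence in histories:
        freq.update(sequence)
    return freq

histories = [
    ["Created", "In Progress", "Waiting", "In Progress", "Done"],
    ["Created", "In Progress", "Done"],
    ["Created", "In Progress", "Waiting", "In Progress", "Waiting", "In Progress", "Done"],
]
freq = status_frequency(histories)
print(freq["Waiting"], freq["Done"])  # 3 3: recurring waiting, though everything finished
```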

8. What NOT to Measure

Some metrics are popular, widely used, and actively harmful.

Story Points as a Performance Metric

Story points were designed as a planning tool — a rough estimate to help teams decide how much work fits in a sprint. They were never meant to measure performance or compare teams.

The moment you use story points to evaluate performance ("Team A delivered 40 points, Team B only 25"), teams start gaming the system. Points inflate. Easy work gets overestimated. The metric becomes meaningless.

Instead, use throughput (items completed per week). It's honest, much harder to game, and tells you what you actually want to know.

Velocity as a Competition Metric

Comparing velocity between teams is meaningless. Different teams estimate differently, work on different problems, and have different definitions of "done." Comparing their velocities is like comparing distances driven by a truck and a bicycle.

Instead, track velocity variance within a team over time. Stable means predictable. That's what matters.

Hours Worked

Hours worked measures presence, not output. A developer who solves a complex problem in 3 focused hours delivers more than one who spends 10 distracted hours. If your organisation tracks hours, that's a payroll concern — not a project management metric.

Lines of Code

More code is not better code. Often, the best work removes code. Measuring lines of code rewards the wrong behaviour.

Number of Meetings

Fewer meetings is not automatically better, and more is not automatically worse. Count decisions made, not meetings held.

9. Putting It All Together

You don't need every metric from day one. Start with three:

  1. Cycle Time — how fast you deliver
  2. WIP — how much you juggle
  3. Sprint Completion Rate — how well you plan (if you use sprints)

These three are cheap to measure, immediately actionable, and cover the most common problems. Add more as specific questions arise.

The pattern for reading any metric

Situation → Observation → Might point to → Suggestion

Example:

  • Situation: Cycle time (85th percentile) is 8 days.
  • Observation: Items take longer than the 3-day benchmark, and it's trending upward.
  • Might point to: High WIP, items too large, unresolved blockers.
  • Suggestion: Check current WIP count. If above 3 per person, institute a WIP limit.

When you don't have enough data

Don't draw conclusions from 3 completed items or 1 sprint.

  Metric                Minimum data needed
  Cycle time            10+ completed items
  Sprint completion     3+ completed sprints
  Velocity variance     5+ completed sprints
  Backlog health, WIP   Always available (snapshots)

Projects without sprints

If you don't use sprints, that's fine. Flow metrics, backlog metrics, and aging metrics all work regardless of methodology. Sprint metrics simply don't apply — skip them.

10. Further Reading

The benchmarks and insights in this guide are grounded in published research. For deeper exploration:

  Source                  Publication                                          Focus
  Daniel Vacanti          Actionable Agile Metrics for Predictability (2015)   Cycle time, aging WIP, flow metrics
  Modig & Ahlstrom        This is Lean (2012)                                  Flow efficiency
  Gerald Weinberg         Quality Software Management (1992)                   Context-switching, WIP impact
  Donald Reinertsen       Principles of Product Development Flow (2009)        Queuing theory, waiting time
  Schwaber & Sutherland   Scrum Guide (2020)                                   Sprint planning, backlog management
  Mike Cohn               Mountain Goat Software                               Velocity, sprint completion
  Roman Pichler           Product Backlog Tips                                 Backlog sizing
  PMI                     Pulse of the Profession (2021)                       Scope creep, project success
  Standish Group          CHAOS Reports (2020)                                 Cancellation rates, project outcomes
  Digital.ai              State of Agile Report                                Industry adoption patterns
  Atlassian               Agile Coach                                          Backlog grooming, WIP
  Forsgren, Humble, Kim   Accelerate / DORA (2018)                             Lead time, team performance
  Kanban University       Kanban Guide                                         Flow metrics, WIP limits
  APA                     Multitasking: Switching Costs                        Context-switching research

This guide is part of Rocket Project — a project management tool. One of its core principles: measure only what matters.