Glossary
By the Glue Team
Velocity is the amount of work a team completes in a sprint, typically measured in story points. It's a measure of team throughput and the primary metric for forecasting project delivery dates. Stable velocity enables reliable planning; unstable velocity signals problems.
Velocity = story points completed per sprint
If a team completes 40 story points in a sprint, its velocity is 40. Velocity is measured after the sprint closes: only completed work counts. Partially finished work doesn't count.
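The rule above (count only completed work, ignore partial work) can be sketched in a few lines. The data shape here is an assumption for illustration, not a real tool's API:

```python
# Sketch: velocity counts only stories marked "done" at sprint close.
# The story dict shape ("points", "status") is an assumed example format.
def sprint_velocity(stories):
    """Sum story points of completed stories; in-progress work counts as zero."""
    return sum(s["points"] for s in stories if s["status"] == "done")

sprint = [
    {"points": 8, "status": "done"},
    {"points": 13, "status": "done"},
    {"points": 5, "status": "in_progress"},  # excluded: not done at close
]
print(sprint_velocity(sprint))  # 21
```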
Velocity is team-specific. A velocity of 40 may be high for one team and low for another. Velocity has no absolute meaning; it only matters as a trend (improving or declining) and for comparison within the same team over time.
Forecasting: Velocity is the primary tool for answering "When will this be done?" If an epic is 200 points and velocity is 40 points/sprint, it will take ~5 sprints. This is more credible than guessing.
Capacity Planning: "What can we accomplish this sprint?" Velocity answers this. If historical velocity is 40, commit 35-40 points. Commit 50 and you'll miss. This is how sustainable pace works.
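The "commit 35-40 when historical velocity is 40" rule of thumb can be expressed as a small helper. The 10% discount is an assumption chosen to match that example, not a standard figure:

```python
# Sketch: commit slightly below average recent velocity to keep a
# sustainable pace. The 0.9 discount is an illustrative assumption.
def sprint_commitment(recent_velocities, discount=0.9):
    """Return a conservative commitment based on average recent velocity."""
    avg = sum(recent_velocities) / len(recent_velocities)
    return round(avg * discount)

print(sprint_commitment([40, 38, 42, 40]))  # 36
```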
Problem Detection: Declining velocity indicates problems. Increasing technical debt? Declining velocity shows it. More interruptions? Declining velocity shows it. Velocity is diagnostic.
Team Performance: Not performance judgment—velocity is not used to rate individuals. But velocity shows if a team is getting more or less productive. This might indicate process improvements or problems.
Iteration Effectiveness: Velocity shows if sprint length works. If velocity is stable, current sprint length is working. If wildly unstable, sprint length or planning might need adjustment.
Team Size: Larger teams complete more absolute work but have communication overhead. Velocity increases with team size but not linearly.
Team Experience: Experienced teams are faster. New team members pull velocity down. This is normal.
Codebase Quality: High technical debt slows velocity. Good code enables fast changes. Paying down debt can improve velocity.
Interruptions: Support requests, production incidents, unplanned meetings—all pull velocity down. Interrupt-heavy teams have lower velocity.
Estimation Accuracy: If estimates are consistently wrong, velocity becomes meaningless. Well-estimated work enables velocity tracking.
Sprint Length: Shorter sprints have more overhead relative to work. Longer sprints are more predictable. Most teams use 2-week sprints as balance.
Definition of Done: Velocity for "features built" is different from "features shipped." Consistent definition is crucial.
Stable Velocity: Completing a similar number of points sprint after sprint (say, 38, 41, 40). This is healthy. It enables reliable forecasting.
Declining Velocity: Dropping from 40 to 35 to 30 points. Indicates problems. Technical debt accumulation? Team turnover? Increasing complexity? Investigate.
Spiking Velocity: 30, 30, 50, 35, 30. Indicates estimation inconsistency or something unusual happened. Investigate spikes.
Ramping Velocity: New team starting at 15 points, climbing to 40 over months. Normal as team gels and learns codebase.
Plateauing Velocity: Velocity climbs then flattens. Team has reached capacity. To increase velocity, improve efficiency or add team members.
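The patterns above can be roughly distinguished by the slope of a least-squares fit over recent sprints. This is an illustrative sketch; the thresholds are assumptions, not standard values:

```python
# Sketch: classify a velocity trend by its least-squares slope.
# The tolerance of +/-1 point per sprint is an assumed threshold.
def velocity_trend(velocities, tolerance=1.0):
    n = len(velocities)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(velocities) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, velocities)) \
        / sum((x - mean_x) ** 2 for x in xs)
    if slope > tolerance:
        return "ramping"
    if slope < -tolerance:
        return "declining"
    return "stable"

print(velocity_trend([40, 35, 30]))   # declining
print(velocity_trend([15, 25, 35, 40]))  # ramping
```

Spiking velocity (e.g., 30, 30, 50, 35, 30) would show up as high variance rather than slope, so a variance check would be a natural extension.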
Simple calculation:
Work remaining / Average velocity = Sprints needed
200 points remaining / 40 points/sprint = 5 sprints needed
But add buffers: the raw calculation assumes no surprises. Pad the forecast (a 20-30% buffer is common) to absorb scope growth, interruptions, and estimation error.
This is how product teams forecast features without detailed task breakdowns.
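The forecast above, with a buffer applied, can be sketched as follows. The 25% default buffer is an assumption for illustration:

```python
import math

# Sketch: sprints needed = work remaining / average velocity, padded
# by a buffer and rounded up. The 25% default buffer is an assumption.
def sprints_needed(points_remaining, avg_velocity, buffer=0.25):
    raw = points_remaining / avg_velocity
    return math.ceil(raw * (1 + buffer))

print(sprints_needed(200, 40))  # 7 (5 raw sprints, plus buffer, rounded up)
print(sprints_needed(200, 40, buffer=0))  # 5 (the unbuffered example above)
```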
Pay Down Technical Debt: High debt slows teams. Improving code health improves velocity over time.
Reduce Interruptions: Protected time for focused work increases velocity. Shield team from constant context switches.
Better Estimation: If estimates are wrong, velocity is meaningless. Calibrate estimates.
Improve Processes: If sprint planning is chaotic, velocity suffers. Clear process improves consistency.
Right-Sizing the Sprint: If the sprint length doesn't fit the team, change it. Some teams do one-week sprints, others do three-week sprints.
Team Stability: Constant turnover destabilizes velocity. Stable team = more predictable velocity.
Clear Definition of Done: Ambiguity about what "done" means causes velocity inconsistency. Explicit definition helps.
Note: Velocity shouldn't be a target to hit or exceed. Teams that are pressured to "increase velocity" game the system (overestimate, reduce quality). Velocity should be natural output of sustainable pace.
"Higher velocity means better team." False. Velocity is contextual. A 20-person team with velocity 50 might be productive. A 5-person team with velocity 50 is probably over-committed and unsustainable.
"We should increase velocity every sprint." False. Velocity should stabilize. Sustainable pace is goal, not maximum possible pace.
"Velocity is about individual productivity." False. Velocity is team metric. It reflects team productivity, codebase quality, process, interruptions—many factors.
"Velocity can be compared across teams." False. Different teams have different story point scales, different definitions of done, different work complexity. Velocity is only meaningful within a team.
"Velocity must be accurate for forecasting." Partly false. Velocity doesn't need to be precise—10% variance is fine. Trends matter more than absolute numbers.
Sprint Estimation: Estimating work for upcoming sprint. Uses historical velocity to determine realistic commitment.
Burndown Chart: Visual showing work remaining in sprint. Velocity is input to burndown expectations.
Capacity Planning: Using velocity to plan what fits in sprint.
Agile Metrics: Velocity is primary Agile metric. Other metrics include cycle time, cumulative flow.
Q: How many sprints of data do you need for a reliable velocity? A: 3-5 sprints gives an initial estimate, but velocity stabilizes further over 10-12 sprints. Use conservative estimates early and get more aggressive after 6+ months.
Q: Should you average velocity over all time or recent sprints? A: Use a weighted average favoring recent sprints; the last four matter most. This accounts for team evolution while still capturing trends.
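A weighted average favoring recent sprints might look like this. The linear weights (oldest to newest) are an illustrative assumption, not a standard scheme:

```python
# Sketch: weighted average of the last four velocities, newest weighted
# most heavily. The 1-2-3-4 weights are an assumed example scheme.
def weighted_velocity(last_four, weights=(1, 2, 3, 4)):
    """last_four is ordered oldest to newest."""
    total = sum(w * v for w, v in zip(weights, last_four))
    return total / sum(weights)

print(weighted_velocity([30, 34, 38, 40]))  # 37.2, pulled toward recent sprints
```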
Q: What if velocity is highly variable? A: This indicates problems. Inconsistent estimation? Unpredictable interruptions? Varying sprint scope? Investigate. Variable velocity makes forecasting unreliable.
Q: Can you compare velocity across teams? A: No. Different teams, different scales. Don't compare "Team A velocity 40" to "Team B velocity 30." Each team's velocity is meaningful within context.