By Priya Shankar
You commit to shipping a feature in 4 weeks. Two weeks in, you realize it's going to take 8 weeks. Now you're in crisis mode. You apologize to stakeholders. You re-prioritize. You're behind.
This happens constantly. Not because your team is bad. Because estimation is hard and most teams do it badly.
Why Estimation Fails
Problem 1: Unknown Unknowns
You estimate: "3 weeks." But you don't know:
Will the third-party API behave as documented? Will edge cases emerge? Will the database migration be simple or complex? Will the scope you imagined match the actual scope?
Unknown unknowns destroy estimates. They're the main reason estimates are so often wrong.
Problem 2: Anchoring Bias
The PM says "I need this in 2 weeks." That number anchors the conversation. Engineers think: "That's not possible, but maybe 4 weeks." The "2 weeks" anchor biases their estimate low.
Meanwhile, the PM assumes "4 weeks" is a promise. They're already planning for a Q3 launch.
Problem 3: Optimism Bias
Engineers estimate optimistically: "Auth should take 1 week." Reality: 2 weeks. They discover the existing auth is messier than expected. They hit edge cases. Optimism bias affects everyone. Reality almost always takes longer.
Problem 4: Context Switching
You estimate 40 hours for a 5-person team (200 hours). But nobody works at 100% focus. People pair program (2x hours for 1x output). People review code. People get interrupted. Effective hours are maybe 60% of estimated.
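The arithmetic above can be sketched as a quick capacity sanity check. Note that the 60% focus factor is the article's rough figure, not a measured constant:

```python
# Rough capacity check: nominal hours vs. effective hours.
# The 0.6 focus factor is an assumption from the text, not a universal value.

def effective_hours(people: int, hours_each: float, focus_factor: float = 0.6) -> float:
    """Estimate the hours of focused work a team can actually deliver."""
    return people * hours_each * focus_factor

nominal = 5 * 40                    # 5-person team, 40 hours each -> 200 nominal hours
effective = effective_hours(5, 40)  # -> 120.0 effective hours
print(nominal, effective)
```

If you estimated 200 hours of work for this team's week, you were off before anyone wrote a line of code.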
Problem 5: Scope Creep
You estimate: "Ship basic version in 2 weeks." Then stakeholders want: "Oh, can we also support X?" "What about Y?" Scope creeps. The estimate was for the original scope. Now you're delivering more.
How Teams Estimate Wrong
Wrong Approach 1: Gut Feel
"How long will this take?" "About 2 weeks." No data. No thinking. Just a guess.
Guesses are wrong. Especially for complex work.
Wrong Approach 2: Optimistic Individual Estimates
Each engineer estimates their part: Frontend: 3 days. Backend: 4 days. Database: 1 day. Testing: 1 day. Total: 9 days.
But integration takes 2 days. Unexpected edge cases take 3 days. Nine days becomes fourteen: you're over by more than 50%. This happens almost every time.
Wrong Approach 3: Reverse Estimation
"We need this shipped by Friday. How much can we do?" You work backwards from the deadline. You estimate scope to fit the timeline, not the other way around.
This leads to cutting scope (incomplete features) or burning out the team.
Wrong Approach 4: Assuming Constant Velocity
"Last sprint we did 40 points. This sprint we'll do 40 points." But last sprint was simple features. This sprint has a complex architectural change.
Velocity varies based on complexity. Assuming constant velocity is wrong.
How to Estimate Better
1. Break Work Into Smaller Pieces
Instead of: "Refactor the payment system: 6 weeks." Break it down:
- Extract payment logic into a separate service: 2 weeks
- Move test cases: 1 week
- Integration testing: 1 week
- Migration and rollback planning: 2 weeks
Now you have weekly milestones. Less uncertainty. Better estimation.
2. Use Historical Data
Track what actually happens: Feature A (estimated 3 days): took 4 days. Feature B (estimated 5 days): took 5 days. Feature C (estimated 1 week): took 2 weeks.
Over time, you see patterns. Complex features take roughly 1.5x their estimate. Integration takes longer than coding.
Use this data to improve estimates. Stop guessing.
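One lightweight way to put this into practice: record (estimated, actual) pairs and compute a calibration multiplier for future estimates. A minimal sketch, using the illustrative Feature A/B/C numbers from above (in days, with 1 week = 5 days):

```python
# Calibrate estimates against history: scale new estimates by the median
# ratio of actual time to estimated time.
from statistics import median

# (estimated_days, actual_days) from past features -- illustrative data
history = [(3, 4), (5, 5), (5, 10)]  # Features A, B, C

ratios = [actual / estimated for estimated, actual in history]
multiplier = median(ratios)  # median is robust to one-off blowups

def calibrated(estimate_days: float) -> float:
    """Scale a raw estimate by the team's historical overrun factor."""
    return estimate_days * multiplier

print(multiplier)       # ratios are 1.33, 1.0, 2.0 -> median ~1.33
print(calibrated(10))   # a raw 10-day estimate, calibrated
```

Even three data points beat a guess; a quarter of tracking gives you a multiplier you can defend.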
3. Estimate Range, Not Point
Instead of: "2 weeks." Estimate: "Most likely 2 weeks, could be 1 week if lucky, could be 3 weeks if we hit edge cases."
This is more honest. You're communicating uncertainty to stakeholders. They understand the range.
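A common way to collapse such a range into a single planning number is the three-point (PERT) formula, which weights the most likely case four times as heavily as the extremes. A sketch using the 1/2/3-week range above:

```python
def pert_expected(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """Three-point (PERT) expected duration: (O + 4*M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# "Could be 1 week if lucky, most likely 2, could be 3 if we hit edge cases."
print(pert_expected(1, 2, 3))  # -> 2.0 weeks
```

The point isn't the formula; it's that you recorded three numbers instead of one, so the uncertainty survives into the plan.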
4. Account for Non-Feature Work
Don't just estimate features. Estimate:
- Code review (typically 20-30% of time)
- Testing (typically 30-40% of time)
- Debugging (typically 10-20% of time)
- Infrastructure / setup (typically 10-20% of time)
Total estimate should include all of this, not just "code writing time."
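Assuming those percentages are applied on top of coding time (the baseline isn't pinned down above), a full estimate could be sketched as:

```python
# Turn "code writing time" into a full estimate by adding non-feature work.
# Assumptions: each percentage applies to coding time, and the midpoints
# of the ranges above are used. Both are illustrative choices.
OVERHEADS = {
    "code review": 0.25,      # midpoint of 20-30%
    "testing": 0.35,          # midpoint of 30-40%
    "debugging": 0.15,        # midpoint of 10-20%
    "infrastructure": 0.15,   # midpoint of 10-20%
}

def full_estimate(coding_days: float) -> float:
    """Coding time plus all non-feature work."""
    return coding_days * (1 + sum(OVERHEADS.values()))

print(full_estimate(10))  # 10 days of coding -> 19.0 days total
```

Nearly doubling the raw coding estimate feels pessimistic until you check it against historical data.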
5. Identify Risk and Add Buffer
When estimating, ask: "What could go wrong?" Unknown API behavior. Hidden complexity in existing code. Database migration issues. Missing requirements.
For each risk, add 20-50% buffer to the estimate.
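Applied mechanically, that might look like the following. The risk names and buffer sizes here are illustrative, not a fixed methodology:

```python
# Add a per-risk buffer on top of a base estimate.
# Risk list and buffer percentages are hypothetical examples.
base_days = 10.0
risks = {
    "unknown API behavior": 0.30,   # 30% buffer
    "hidden complexity": 0.20,      # 20% buffer
}

buffered = base_days * (1 + sum(risks.values()))
print(buffered)  # -> 15.0 days
```

Naming each risk next to its buffer also gives you something concrete to revisit when the estimate is challenged.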
6. Separate Estimate From Commit
Estimate independently of deadline: Engineer: "This is estimated to take 3 weeks." PM: "But I need it in 2 weeks." Discussion: "What can we cut? What's the minimum viable version?"
Estimate the work. Negotiate scope. Don't negotiate estimates.
Red Flags in Estimation
- "Just 3 more days," repeated weekly. Your estimate was wrong. Use this data.
- Actual time is consistently 1.5-2x estimate. Your team under-estimates. Calibrate.
- Stakeholders are surprised when you miss. You're not communicating uncertainty. Communicate range.
- Engineers are burned out at sprint end. You're overcommitting. Reduce commitment.
- Complex work is estimated the same as simple work. Break work down more. Complexity varies.
Communicating Estimates
To Product
"This feature is estimated at 3-4 weeks, most likely 3.5. The primary risk is the payment API integration, which could add 1 week. The minimum viable version could be delivered in 2 weeks if we cut X and Y."
Now they understand: the estimate is 3-4 weeks, not a promise of 3 weeks.
To Stakeholders
"We can deliver the basic feature in 4 weeks. Additional refinement would take 2 more weeks. If you need it by Q2, we should start reducing scope now."
Give them options. Let them choose.
Within the Team
"We estimated 2 weeks. We're 1 week in. We're on track, but we've identified a risk around the database migration. If it hits us, we'll need another 2-3 days."
Keep people updated as new information emerges.
The ROI of Better Estimation
When estimates improve: Stakeholders trust the team ("You said 3 weeks, you delivered in 3 weeks"). Planning becomes predictable (roadmap is more accurate). Velocity stabilizes (you know what you can commit to). Morale improves (no constant overcommitment). Hiring decisions improve (you know actual capacity).
Better estimation enables everything else.
Frequently Asked Questions
Should we use story points or actual time? Actual time is more useful. Story points are relative and abstract. Time is concrete. Use time.
What if we don't have historical data? Start collecting it. For the next month, track what you estimate vs. what actually happens. You'll have enough data in a few weeks to calibrate.
How do we handle estimates when the codebase is messy? Add 30-50% buffer for "existing code complexity." As the codebase improves, the buffer decreases. This incentivizes paying down technical debt.