By Vaibhav Verma
The Standish Group has been tracking software project outcomes since 1994. Their latest data says 66% of technology projects end in partial or total failure. McKinsey found that large IT projects run 45% over budget and 7% over time, while delivering 56% less value than predicted. The Project Management Institute estimates $109 million wasted for every $1 billion invested.
These numbers have been roughly the same for thirty years. Three decades of better tools, better frameworks, better methodologies, better talent - and the failure rate has barely moved.
The usual explanations are poor planning, scope creep, bad communication, and unrealistic timelines. All of those are real. But they're symptoms, not root causes. They're what you see when you autopsy a failed project. They don't explain why smart teams keep making the same mistakes.
I've spent a decade building and managing engineering teams, and I think the root cause is simpler and more uncomfortable than any of those: the people making product decisions cannot see the system they're deciding about.
The Visibility Crisis
Software projects fail because of an information asymmetry at the center of every product organization.
Engineers understand the system. They know the architecture, the constraints, the fragile parts, the assumptions that everything rests on. But they communicate this knowledge poorly, partially, and only when asked. Not because they're bad communicators - because the knowledge is too complex to transfer through conversation.
Product managers, CTOs, and executives make decisions about the system. They decide what to build, when to build it, and how much to invest. But they make these decisions with a fraction of the relevant information, filtered through engineering explanations that are necessarily incomplete.
This gap produces every symptom on the standard "why projects fail" list. Unrealistic timelines? That's what happens when the person setting the timeline can't see the architectural complexity. Scope creep? That's what happens when requirements are defined by people who don't understand the constraints they're designing within. Poor communication? That's a euphemism for "product and engineering are operating on different maps of reality."
A 2024 BCG study on IT project failures found that the primary cause was "lack of alignment between the technology and business sides of the organization about operational objectives." They framed it as an alignment problem. I'd frame it differently: you can't align on something you can't see.
The $109 Million Question
That PMI statistic - $109 million wasted per $1 billion invested - breaks down into a few categories that all trace back to the visibility gap.
Rework from misunderstood requirements. The requirements were clear in product terms ("add multi-tenant support") but the product team didn't understand the architectural implications (data isolation, permission models, query patterns, migration paths). Engineering discovers the real complexity mid-build. Scope expands. Timeline slips. Budget overruns.
For a 40-person engineering org at a Series B company I worked with, misunderstood requirements accounted for roughly 22% of total engineering time over six months. Not because the PM was vague - her specs were detailed. Because she was specifying features against a system she couldn't see, so her assumptions about complexity were systematically wrong.
Estimation failures. When the people estimating work can't see the codebase, estimation becomes guessing dressed up as planning. A Geneca survey found that 75% of respondents said their projects felt "doomed from the start." Not because the projects were impossible, but because the estimates were fiction.
I've watched this pattern at three companies. PM asks "how long will this take?" Engineer says six weeks. It takes fourteen. Not because the engineer is bad at estimating - because the scope they estimated against was incomplete, and the PM didn't know enough about the system to catch the gap.
Recovery from preventable incidents. When nobody understands how the system works, incidents that should take fifteen minutes to resolve take four hours. A 2024 PagerDuty report found that mean time to resolution increases 77% when the responding engineer hasn't worked on the affected service before. For a company processing 10,000 transactions per hour, a three-hour-longer outage isn't just an engineering problem. It's a revenue problem.
Knowledge loss from attrition. When critical knowledge lives in one person's head and that person leaves, the cost isn't just the $200K recruiting fee to replace them. It's the six to twelve months of degraded velocity while the team reverse-engineers systems that the departed engineer understood intuitively. For teams with 15-20% annual attrition (typical for tech), this is a continuous, compounding tax.
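The "continuous, compounding tax" can be roughed out the same way. The knowledge-rebuild cost per departure below is my assumption; the team size, attrition rate, and recruiting fee come from the figures above:

```python
# Annual cost of knowledge loss from attrition, back-of-envelope.
team_size = 20                    # example team, within the typical range above
attrition_rate = 0.15             # annual, low end of the 15-20% range above
recruiting_fee = 200_000          # per replacement, from the figure above
knowledge_rebuild_cost = 150_000  # ASSUMED lost output per departure while the team re-learns

departures_per_year = team_size * attrition_rate
annual_tax = departures_per_year * (recruiting_fee + knowledge_rebuild_cost)
print(f"${annual_tax:,.0f} per year")  # → $1,050,000 per year
```

The recruiting fee is the visible line item; under these assumptions the invisible re-learning cost is nearly as large, and it recurs every year.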
Why Traditional Fixes Don't Work
The standard playbook for reducing project failure includes better project management, Agile adoption, more planning, and better documentation. We've been trying all of these for decades. The failure rate hasn't changed.
Agile didn't solve it. Agile shortened feedback loops, which genuinely helps. But it didn't close the visibility gap. A PM who can't see the codebase in a waterfall process still can't see it in two-week sprints. They just discover they can't see it more frequently.
Documentation didn't solve it. Documentation is a manual process trying to keep pace with a continuously changing system. Code changes through dozens of PRs per day. Documentation updates when someone remembers, which is rarely. A 2023 survey by Swimm found that 70% of internal documentation is outdated within three months. Outdated documentation is worse than no documentation because it creates false confidence.
Better project management didn't solve it. You can't manage what you can't see. A project manager tracking velocity, burn-down charts, and sprint commitments is measuring outputs. They're not measuring whether those outputs are coherent with the underlying system. A project can be on schedule and on budget while systematically building on wrong assumptions about the architecture.
The pattern is always the same: we try to fix a visibility problem with a process solution. Process can't substitute for information. If the people making decisions can't see the system, no amount of methodology will make their decisions right.
The 70% Requirements Problem
An often-cited statistic holds that 70% of project failures trace to requirements issues. This is usually interpreted as "product teams need to write better requirements." I think that interpretation is exactly backwards.
The requirements are bad because the people writing them can't see the system they're writing requirements for.
When a PM writes "add real-time collaboration," they're specifying a user-facing behavior. That's their job. But the quality of that specification depends entirely on whether they understand the system it needs to fit into. Does the current architecture support WebSocket connections? Is the data model designed for concurrent edits? Is there an event system that can handle real-time updates?
If the PM knows the answers, the requirement becomes: "add real-time collaboration, which will require extending the event system and updating the data model to support concurrent writes." That's a requirement an engineer can estimate accurately.
If the PM doesn't know the answers, the requirement stays at: "add real-time collaboration." And the engineer estimates against their best guess of what that means, which is inevitably incomplete.
The 70% isn't a requirements problem. It's a visibility problem wearing a requirements costume.
What Actually Works
After watching this pattern repeat at every company I've worked at - and after building a company specifically to address it - I've come to believe the solution has three parts.
Make the codebase legible to non-engineers. Not through documentation that gets stale or explanations that get filtered. Through systems that read the code directly and present its structure, constraints, and state in language that product leaders can act on. This is why I built Glue - to give PMs and engineering leaders the ability to query their codebase the way they query their analytics dashboard. "What services does checkout depend on?" "What changed in the last sprint?" "Where is the complexity concentrated?" These questions should be answerable without interrupting an engineer.
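To make "query your codebase like an analytics dashboard" concrete, here is a minimal sketch of the kind of question such a system answers. The service names and dependency map are entirely hypothetical; a real tool would derive this graph from the code itself rather than a hand-written dict:

```python
# Hypothetical service-dependency map. In practice this graph would be
# extracted automatically from the codebase; these names are illustrative.
deps = {
    "checkout": ["payments", "inventory", "notifications"],
    "payments": ["ledger"],
    "inventory": [],
    "notifications": ["email-gateway"],
    "ledger": [],
    "email-gateway": [],
}

def transitive_deps(service):
    """Answer 'what services does this depend on?' by walking the graph."""
    seen, stack = set(), list(deps[service])
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(deps[s])
    return sorted(seen)

print(transitive_deps("checkout"))
# → ['email-gateway', 'inventory', 'ledger', 'notifications', 'payments']
```

The point isn't the traversal - it's that a PM asking "what does checkout depend on?" gets five services back in seconds, without pulling an engineer out of flow to reconstruct the answer from memory.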
Embed architectural awareness in product decisions. Every feature request should include an architectural impact assessment. Not from the PM (they shouldn't need to write one) and not from a committee (too slow). From a system that automatically maps feature proposals to the parts of the codebase they'll touch. When a PM can see that "add webhook support" touches four services and requires a new event pipeline, the conversation about timeline becomes honest from the start.
Distribute system knowledge continuously. The bus factor problem - critical knowledge concentrated in one or two people - is the number one amplifier of project failure. When the person who understands the payment system leaves, every estimate for payment-related features becomes fiction. Distributing knowledge doesn't mean documentation drives. It means making it possible for anyone in the organization to understand any part of the system at any time, without requiring the knowledge holder to stop what they're doing and explain.
The Uncomfortable Truth
Thirty years of data tell us that software projects fail at roughly the same rate regardless of methodology, talent, or tooling. The variable that hasn't changed in thirty years is the visibility gap: the people deciding what to build cannot see the thing they're building on.
Every other industry has solved this. An architect can see the building plans. A factory manager can see the production line. A pilot can see the instrument panel. Software product leaders manage the most complex systems in their organization - and they do it blind.
That's not a people problem. It's not a process problem. It's a tooling problem. And it's solvable.
The question is whether your organization will solve it deliberately, or keep paying the $109 million tax and calling it inevitable.
Frequently Asked Questions
Q: Why do so many software projects fail?
The root cause is an information asymmetry: the people making product decisions (PMs, CTOs, executives) cannot see the system they're deciding about. This produces unrealistic timelines, misunderstood requirements, estimation failures, and preventable incidents. Better processes help at the margin, but don't close the core visibility gap.
Q: What percentage of software projects fail?
The Standish Group reports that 66% of technology projects end in partial or total failure. McKinsey found that large IT projects run 45% over budget while delivering 56% less value than predicted. The PMI estimates $109 million wasted per $1 billion invested.
Q: What is the main reason for software failure?
Most analyses point to requirements issues, which account for roughly 70% of project failures. But requirements fail because they're written by people who can't see the system they're specifying against. The deeper cause is a visibility gap between product decision-makers and the codebase.
Q: What are the top 3 reasons why projects fail?
Misunderstood requirements (specifying features without understanding architectural constraints), estimation failures (guessing complexity without system visibility), and knowledge concentration (critical understanding living in one or two people who eventually leave). All three trace to the same root cause: insufficient visibility into the system being built.