Jira Can Track Work. It Can't Verify the Problem Is Solved.
This is a hard truth about work tracking tools: they track status, not resolution. A ticket moving to "Done" means an engineer marked it done. It does not mean the bug is fixed, the debt is addressed, the feature is working, or the root cause is resolved.
This creates a massive category of waste called ghost work: tickets closed without actually solving the problem.
The Status vs. Resolution Problem
Jira, Linear, GitHub Issues - they all work the same way. A ticket has fields: status, assignee, maybe some custom fields. Work happens. Someone moves the ticket to "Done" or "Closed" or "Resolved."
That's when the tracking stops. There's no verification step. No system checks whether the underlying problem actually changed. Just a status change.
Here's what actually happens in practice:
- A bug ticket is filed: "Login fails for users with special characters in their email."
- An engineer investigates, adds a fix to the validation logic.
- Tests pass. The engineer marks the ticket done.
- What the engineer didn't know: the same special-character validation was duplicated in three other places in the codebase. The fix worked in one place. The other three code paths still mishandle those emails and still cause login failures.
- The ticket stays closed because the status said it was done.
- Three sprints later, a user with special characters in their email hits one of the other places and a new bug ticket is created.
Or consider technical debt:
- A ticket is filed: "UserService module is too complex and hard to test."
- A developer spends a day refactoring one method to be cleaner.
- The method is cleaner. Tests are still hard to write. The overall module complexity hasn't changed.
- The ticket is marked done. The underlying debt still exists.
- When the next person tries to add a feature, they discover the module is still complex.
Or features:
- A feature ticket: "Add dark mode to user settings."
- An engineer implements dark mode, ships it, marks it done.
- The feature goes live. 2% of users enable it, and almost none of them keep using it.
- The ticket stays closed. The feature exists but nobody uses it.
- Engineering time was spent on something that delivers no value.
None of these problems are visible because the ticket status can't see the actual codebase state.
Why This Matters
Ghost work compounds. Every closed ticket that didn't actually solve the problem becomes a hidden liability. It creates false confidence that the problem is handled. It makes recurring issues harder to track - is this a new bug or the same bug that was supposedly fixed? It wastes investigation time because engineers assume the previous ticket was actually resolved.
For technical debt, ghost work is especially expensive. You "address" complexity by refactoring one piece, close the ticket, and the complexity hasn't actually improved. The module is still hard to change. But the organization thinks the problem is solved so it doesn't get refactored again until it's much worse.
The result: over time, work tracking systems become less useful. Closed tickets don't mean solved problems. People stop trusting the system. Tickets get reopened constantly. Engineers stop writing good descriptions because they know the status won't reflect reality anyway.
What Real Resolution Looks Like
Resolution is different from status change. It's a codebase state change that can be verified automatically.
When a bug is fixed, resolution means:
- The underlying issue no longer exists in the code.
- There's a test that would fail if the issue reappeared.
- The same error pattern won't appear in similar code paths.
You can verify this: run the test, does it pass? Check the error signature, does it appear in production logs? Check for parallel implementations, are they vulnerable to the same bug?
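Those checks can be combined into a single automated gate. A minimal sketch (all names here are hypothetical, not from any real tracker's API): the bug counts as resolved only when its regression test passes and the error signature has disappeared from recent logs.

```python
def bug_resolved(regression_test_passed: bool,
                 recent_log_lines: list[str],
                 error_signature: str) -> bool:
    """A bug is resolved only if the regression test passes AND the
    error signature no longer appears in recent production logs."""
    signature_gone = not any(error_signature in line
                             for line in recent_log_lines)
    return regression_test_passed and signature_gone

# Hypothetical log sample: signature absent, test green -> resolved.
logs = [
    "INFO login ok for alice@example.com",
    "INFO login ok for bob+test@example.com",
]
print(bug_resolved(True, logs, "LoginValidationError"))  # True
```

The point of the sketch is the AND: a passing test alone doesn't close the ticket if production logs say the error is still happening.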
When technical debt is addressed, resolution means:
- The measured complexity or coupling has actually decreased.
- New code in that module follows the improved pattern.
- Test coverage in that module has improved.
You can verify this: measure the complexity, has it dropped? Check recent commits, are they improving the module or worsening it?
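Complexity can be measured cheaply enough to automate. Here's a rough sketch using Python's standard `ast` module: it approximates cyclomatic complexity as one plus the number of branch points, which is a crude stand-in for dedicated tools like radon but enough to tell whether a refactor actually reduced branching.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 plus one per branch point."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try,
                    ast.With, ast.BoolOp, ast.ExceptHandler)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes)
                   for node in ast.walk(tree))

sample = '''
def f(x):
    if x > 0:
        if x > 10:
            return "big"
        return "small"
    return "neg"
'''

score = cyclomatic_complexity(sample)
target = 10  # the verification target agreed before work started
print(score)  # 3
print("debt resolved" if score <= target else "still complex")
```

With a measurement like this wired into CI, "the module is simpler now" stops being an opinion and becomes a number you can gate closure on.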
When a feature is shipped, resolution means:
- The feature is live and users are actually using it.
- The feature is generating value (reduced support load, increased engagement, enabled new workflows).
- The code supporting the feature is maintainable.
You can verify this: check adoption metrics, is anyone using it? Check incidents related to the feature, are there issues? Check test coverage for the feature, is it adequate?
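An adoption check can be just as mechanical. A sketch, assuming a hypothetical metrics feed that reports enabled users and active users: the feature ticket closes only when the enable rate clears the threshold the team agreed on up front.

```python
def feature_adopted(enabled_users: int, active_users: int,
                    threshold: float = 0.05) -> bool:
    """True when the share of active users with the feature enabled
    meets the agreed adoption target (5% here, by assumption)."""
    if active_users == 0:
        return False  # no traffic yet; can't claim adoption
    return enabled_users / active_users >= threshold

print(feature_adopted(enabled_users=20, active_users=1000))   # False: 2% < 5%
print(feature_adopted(enabled_users=120, active_users=1000))  # True: 12% >= 5%
```

Under this gate, the dark-mode scenario above (2% adoption) would leave the ticket open, forcing a conversation about discoverability or value instead of silently counting the work as done.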
How Verification Changes Work
With verification, the workflow changes:
1. Problem is identified. Ticket is created with a description of the problem.
2. Verification target is defined. Before work starts, the team agrees: how will we know this is actually fixed? If it's a bug, we need a test. If it's debt, we need a complexity measurement. If it's a feature, we need an adoption metric.
3. Work happens. Code is written, tests are added, deployment happens.
4. Verification runs. Automatically, systems check: does the test pass? Did the complexity drop? Is the feature being used? Is this metric improving?
5. Only then is the ticket closed. Status reflects reality.
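The workflow above can be sketched as code. This is a toy model with hypothetical names, not any real tracker's API: a ticket carries its verification checks, and its close step succeeds only when every check passes.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Ticket:
    title: str
    # Verification checks agreed before work starts (step 2 above).
    checks: list[Callable[[], bool]] = field(default_factory=list)
    status: str = "open"

    def close(self) -> bool:
        """Transition to 'done' only if every verification check passes."""
        if self.checks and all(check() for check in self.checks):
            self.status = "done"
        return self.status == "done"

complexity = 14  # imagined value fed in from a metrics pipeline
ticket = Ticket(
    "UserService module is too complex and hard to test",
    checks=[lambda: complexity <= 10],  # the agreed verification target
)

print(ticket.close(), ticket.status)  # False open: metric still above target

complexity = 9  # after a real refactor, the measured complexity drops
print(ticket.close(), ticket.status)  # True done: verification passed
```

Notice that nothing in this model lets a human set `status = "done"` directly; the only path to "done" runs through the checks.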
This requires thinking before work starts. "We're going to fix this bug by adding a test that would fail without the fix" is different from "we're going to fix this bug." The second is vague. The first is specific and verifiable.
What This Requires
Three things have to be true for verification to work:
1. Measurable verification targets. "Fix this bug" isn't measurable. "This error no longer appears in production logs and we have a test covering this case" is measurable. Every ticket needs a verification target before work starts.
2. Automated measurement. The verification can't be manual. It has to be something a system can check: does a test pass? Did a metric drop? Did a codebase pattern change? This requires instrumentation and automation.
3. Closure tied to verification. The ticket can't be manually closed. It's closed when verification passes. This requires the work tracking system to be connected to the measurements.
Most teams don't have this infrastructure. Verification is manual and honored sporadically. "Did you test this?" "Yeah, it looks good." That's closure without verification.
The Ghost Work Alternative
Without verification, tickets become theater. They create the appearance of progress without guaranteeing actual progress. Teams that live with ghost work develop workarounds:
- Senior engineers stop believing tickets are done, so they re-check everything.
- Tickets get reopened constantly because the problem resurfaces.
- Post-mortems become "why did we close this ticket without fixing it?"
- Debt accumulates because "addressing" it doesn't actually improve anything.
You can see this in teams that have been running on ghost work for years: they have elaborate manual verification processes because they've learned the hard way that status changes lie.
The Alternative: Verification at Close
Teams that have invested in verification close tickets less often, but when they do, the problem is actually solved. They deploy a refactored module and the complexity metric drops. They fix a bug and the related test passes and the production error rate for that signature drops to zero. They ship a feature and adoption metrics show it's being used.
Tickets stay closed because the underlying problem actually changed.
This takes more work upfront. Defining verification targets requires thought. Setting up automated measurement requires infrastructure. But the payoff is massive: your issue tracking system actually reflects reality. Recurring problems disappear because you don't close tickets without confirming the problem is solved.
Frequently Asked Questions
Q: Every ticket has different verification criteria. Doesn't this create a lot of complexity?
A: There are patterns. Most bugs need tests. Most debt reduction needs metrics. Most features need adoption data. Standard templates for verification targets can help. The complexity comes from vague tickets that nobody really understands anyway.
Q: What do we do about tickets that can't be measured?
A: If a ticket can't be measured, it's usually poorly defined. "Improve code clarity" can't be measured. "Reduce cyclomatic complexity in the UserService from 18 to 10" can be. Ask what the ticket is actually trying to achieve and define a measurement for it. If you still can't find a measurement, the ticket probably shouldn't exist.
Q: This requires connecting work tracking to codebase intelligence. Our tools don't support this.
A: Some teams have built this infrastructure. Others use tools specifically designed to bridge the gap. Either way, it's worth the investment if ghost work is costing you. Even a partial solution - automated verification for certain ticket types - pays for itself quickly.
Related Reading
- What Is Codebase Intelligence?
- What Is Code Intelligence?
- The Product Manager's Guide to Understanding Your Codebase
- The CTO's Guide to Product Visibility
- How to Do Competitive Analysis When You Don't Know Your Own Product