tldr: Defect tracking is the process of recording each bug from discovery to closure with enough detail that it cannot be lost. Capture the right fields, run a clear workflow, and pick a tool the team will actually use.
What gets tracked
Defect tracking has one job: every bug found in any environment is captured, prioritized, assigned, fixed, verified, and closed, with the history visible to anyone who needs it.
Without that, three problems compound.
- Bugs get lost between QA and engineering.
- Severity decisions get made twice, differently, by different people.
- Trends across releases stay invisible.
Required fields for a good bug ticket
A useful bug record contains, at minimum, the following fields.
Title. Specific. "Login fails for SSO users on Firefox" beats "Login broken."
Description. What you observed. What you expected. The gap between the two.
Steps to reproduce. Numbered, specific, copy-pasteable where possible.
Environment. Browser, OS, version, account type, environment (prod/staging/local).
Severity. Impact: how bad is the bug?
Priority. Urgency: when do we fix it?
Attachments. Screenshot, video, network logs, or full reproduction trace.
Reporter and assignee.
Linked work. PR that fixed it, related tickets, customer reports.
Many teams skip severity and priority or merge them into one field. They are different signals. Severity describes the bug. Priority describes the fix order. A high-severity bug affecting one customer might have lower priority than a medium-severity bug affecting all customers.
A workflow that works
Most teams need this state machine and no more:
- New. Just filed. Awaiting triage.
- Triaged. Severity, priority, and assignee set.
- In Progress. Engineer is working on it.
- In Review. Fix is in PR.
- Verified. QA has confirmed the fix.
- Closed. Done.
Add Reopened if a verified bug returns. Avoid 12-state workflows that no one follows.
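The state machine above can be written down as an explicit transition table. A minimal sketch, assuming the seven states listed (real trackers enforce this in their workflow configuration); the allowed transitions are an illustration, not a prescription.

```python
# Allowed transitions for the six-state workflow, plus Reopened.
TRANSITIONS = {
    "New":         {"Triaged"},
    "Triaged":     {"In Progress"},
    "In Progress": {"In Review"},
    "In Review":   {"Verified", "In Progress"},  # review can bounce the fix back
    "Verified":    {"Closed"},
    "Closed":      {"Reopened"},                 # a verified bug returns
    "Reopened":    {"Triaged"},
}

def move(state: str, target: str) -> str:
    """Advance a defect to `target`, rejecting transitions the workflow forbids."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Cannot move from {state} to {target}")
    return target

# Walk a defect through the happy path.
state = "New"
for step in ["Triaged", "In Progress", "In Review", "Verified", "Closed"]:
    state = move(state, step)
```

A table this small is the whole point: every legal path fits on one screen, which is exactly what a 12-state workflow cannot offer.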
Tools the industry uses
The market is competitive but predictable.
- Jira. Most common. Heavyweight, customizable, and widely integrated.
- Linear. Faster, opinionated, popular with smaller engineering teams.
- GitHub Issues. Fine for open source and small projects. Limited reporting.
- Asana, Trello, ClickUp. Used by some teams, mostly when the same tool covers product management.
The tool matters less than the discipline. A team that fills out tickets carefully wins with any tool. A team that does not loses with all of them.
Severity levels that actually mean something
Three levels work for most teams. Five start producing arguments. Seven are theater.
- P0 / Critical. Production is broken for a meaningful share of users. Drop everything.
- P1 / High. Important feature is broken or degraded. Next sprint.
- P2 / Normal. Bug is real but does not block critical work.
If your team needs more granularity, add P3 (cosmetic) and stop there.
Linking defects to test cases
A bug found in testing should link back to the test case that found it. A bug found in production should link to the test case that should have caught it but did not.
The second link is more valuable than the first. It tells you exactly where your test coverage failed and what to add. Over time, the gap closes.
For modern AI testing, this happens automatically. Bug0 attaches the test that failed, the artifact, and the root-cause hint to every defect. Reproduction time drops from hours to seconds.
Defect metrics worth tracking
Most defect metrics are vanity. Three actually correlate with shipping fewer bugs.
Mean time to detection. From bug introduction to bug found. Lower is better. Improves with broader test coverage and continuous testing.
Mean time to fix. From triage to verified. Reflects engineering capacity and prioritization.
Escaped defect rate. Bugs found in production divided by total bugs found. The single number that matters most. See escaped defects.
Avoid raw bug counts. They reward filing bad tickets and punish thorough testing.
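Two of the three metrics fall out of timestamps you are already capturing. A hypothetical sketch with made-up records; the tuple layout (where found, introduced, found, verified) is an assumption for illustration.

```python
from datetime import datetime

# Hypothetical defect records: (found_in, introduced_at, found_at, verified_at)
defects = [
    ("qa",   datetime(2024, 6, 1), datetime(2024, 6, 2), datetime(2024, 6, 4)),
    ("prod", datetime(2024, 6, 1), datetime(2024, 6, 8), datetime(2024, 6, 9)),
    ("qa",   datetime(2024, 6, 3), datetime(2024, 6, 3), datetime(2024, 6, 5)),
]

# Mean time to detection: introduction -> found, averaged in days.
mttd = sum((found - intro).days for _, intro, found, _ in defects) / len(defects)

# Escaped defect rate: production finds over total finds.
escaped_rate = sum(1 for env, *_ in defects if env == "prod") / len(defects)

print(f"MTTD: {mttd:.1f} days, escaped defect rate: {escaped_rate:.0%}")
```

Note that neither metric needs a bug count in the numerator and denominator of different reports, which is what makes them hard to game compared to raw counts.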
FAQs
Should every issue be a tracked defect?
No. A coding mistake caught and fixed in the same PR is not a defect. Track defects that escape to QA, staging, or production.
How long should bugs stay open?
Define an aging policy. P0 closes within hours. P1 within the sprint. P2 within the next two sprints or gets explicitly deferred. Anything older than 90 days probably will not get fixed.
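An aging policy like this is easy to automate. A minimal sketch, assuming a two-week sprint and a daily check (so "P0 within hours" is approximated as one day); the limits and record shape are illustrative.

```python
from datetime import date, timedelta

# Aging limits per priority, sketched from the policy above.
AGING_LIMIT = {
    "P0": timedelta(days=1),    # "within hours" -- approximated by a daily check
    "P1": timedelta(weeks=2),   # within the sprint (two weeks assumed)
    "P2": timedelta(weeks=4),   # within the next two sprints
}
STALE_AFTER = timedelta(days=90)  # older than this: defer explicitly or close

def aging_report(bugs: list[tuple[str, str, date]], today: date) -> dict[str, list[str]]:
    """Bucket open bugs into 'overdue' and 'stale' by priority and age."""
    report: dict[str, list[str]] = {"overdue": [], "stale": []}
    for bug_id, priority, opened in bugs:
        age = today - opened
        if age > STALE_AFTER:
            report["stale"].append(bug_id)
        elif age > AGING_LIMIT[priority]:
            report["overdue"].append(bug_id)
    return report

bugs = [
    ("BUG-1", "P0", date(2024, 6, 1)),    # nine days old: overdue for a P0
    ("BUG-2", "P2", date(2024, 5, 20)),   # within its four-week window
    ("BUG-3", "P1", date(2024, 6, 9)),    # one day old: fine
]
report = aging_report(bugs, today=date(2024, 6, 10))
```

Running this in CI or a daily cron turns the aging policy from a wiki page into an alert.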
Who closes a defect?
The reporter or the QA engineer who verified the fix. Not the engineer who fixed it. This avoids the "fixed in my testing" closure that turns out wrong.
How does Bug0 affect defect tracking?
Bug0 creates rich defect records automatically when an automated test fails: title, steps, environment, screenshot, full trace. The team triages instead of filing.
What is the difference between a defect and a bug?
In strict usage, a bug is in code, a defect is in the deliverable. In daily usage, the terms are interchangeable. See defect vs bug for the deeper take.
