Requirements-based testing

tldr: Requirements-based testing maps every requirement to one or more test cases. The output is a traceability matrix that proves what is tested, what is not, and where coverage gaps exist. Essential for regulated software, valuable for everyone else.


What it solves

The most common QA failure: tests cover what was easy to test, not what was important. Months later, a customer finds a bug in a requirement no one tested.

Requirements-based testing prevents this by making the mapping explicit. Each requirement gets a test. Each test traces back to a requirement.


The traceability matrix

A simple table. Rows are requirements. Columns are test cases. A cell is filled if that test covers that requirement.

| Requirement                            | TC-001 | TC-002 | TC-003 | TC-004 |
| -------------------------------------- | ------ | ------ | ------ | ------ |
| REQ-1: User can log in                 | Yes    |        |        |        |
| REQ-2: Password reset works            |        | Yes    |        |        |
| REQ-3: Account locks after 5 failures  |        |        | Yes    | Yes    |

Empty rows are uncovered requirements. They need tests. Empty columns are tests that do not trace to a requirement; they may not be useful.

In practice, the matrix lives in a test management tool (TestRail, Zephyr, Xray) or a spreadsheet for smaller teams.
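The gap-finding logic behind the matrix is simple enough to sketch in a few lines. This is an illustrative example, not any tool's API; the requirement and test IDs extend the table above with a hypothetical uncovered requirement (REQ-4) and an untraced test (TC-005).

```python
# Minimal traceability-matrix check. IDs are illustrative; real data
# would come from a test management tool export or a spreadsheet.
requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]  # REQ-4: no test yet (hypothetical gap)
tests = {
    "TC-001": ["REQ-1"],
    "TC-002": ["REQ-2"],
    "TC-003": ["REQ-3"],
    "TC-004": ["REQ-3"],
    "TC-005": [],  # traces to nothing -- may not be useful
}

covered = {req for reqs in tests.values() for req in reqs}
uncovered = [r for r in requirements if r not in covered]  # empty rows
untraced = [t for t, reqs in tests.items() if not reqs]    # empty columns

print("Uncovered requirements:", uncovered)
print("Untraced tests:", untraced)
```

Empty rows and empty columns fall out of the same two passes, which is why the check is cheap to run in CI on every change.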


What "requirement" means here

The bar is testability. "The system should be reliable" is not a requirement; it is an aspiration. "The login endpoint responds within 500 ms at the 95th percentile under 100 concurrent requests" is a requirement.

If you cannot write a test for it, you cannot do requirements-based testing on it. The forcing function is good: vague requirements get rewritten before testing starts.
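A testable requirement translates directly into an assertion. Here is a minimal sketch of the latency requirement above; the measurements are stubbed sample data standing in for a real load-test run, and the percentile helper is illustrative.

```python
# Sketch: the "p95 under 500 ms" requirement as an assertion.
def p95(samples_ms):
    """95th-percentile latency via nearest-rank on sorted samples."""
    ordered = sorted(samples_ms)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[idx]

# Stubbed measurements; a real test would drive 100 concurrent requests
# against the login endpoint and record each response time.
latencies_ms = [120, 180, 210, 250, 300, 310, 320, 340, 360, 480]

assert p95(latencies_ms) < 500, "REQ: login p95 must be under 500 ms"
```

The point is not the harness; it is that a requirement phrased this precisely needs no interpretation before it becomes a pass/fail check.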


When this matters

Regulated software. Aerospace (DO-178C), medical devices (IEC 62304), automotive (ISO 26262). Auditors require the matrix.

Contractual deliveries. Customers contracting custom development often require requirements traceability as a deliverable.

Any team that wants honest coverage metrics. Coverage measured against requirements rather than code lines is a more meaningful number.


How to maintain it

Three habits keep the matrix current.

New requirement, new test. Every requirement enters with a planned test. No exceptions.

Test deletion checks the matrix. Before removing a test, ask: which requirement loses coverage?

Sprint or release reviews include matrix review. Catch drift before it becomes a coverage gap.


How AI testing fits

The matrix says what to test. AI testing platforms make implementing the tests cheaper. Bug0 generates tests from goals stated in plain language, which maps naturally to requirements: each requirement becomes a goal.


FAQs

Is this only for waterfall projects?

No. Agile teams use lightweight versions: requirements live in user stories, tests link via tickets. Same idea, less ceremony.
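In code-based suites, the link can live next to the test itself. A lightweight convention (illustrative, not any framework's API) is a decorator that records which requirement each test covers, so the matrix becomes a query over a registry instead of a hand-maintained table:

```python
# Illustrative convention: register the requirement each test covers.
REQUIREMENT = {}

def covers(req_id):
    """Decorator that links a test function to a requirement ID."""
    def mark(fn):
        REQUIREMENT[fn.__name__] = req_id
        return fn
    return mark

@covers("REQ-3")
def test_account_locks_after_five_failures():
    assert True  # stand-in for driving the real lockout flow

print(REQUIREMENT)  # {'test_account_locks_after_five_failures': 'REQ-3'}
```

Test frameworks with tagging or marker support achieve the same thing natively; the mechanism matters less than the habit of never landing an unlinked test.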

How granular should requirements be?

Granular enough that one to three tests cover each. Too coarse and the matrix is useless. Too fine and the matrix becomes unwieldy.

Who maintains the matrix?

QA owns it, with input from product and engineering.

How does Bug0 support traceability?

Bug0 tags each test with the requirement it covers. The matrix becomes a query on the test management view.

Ship every deploy with confidence.

Bug0 gives you a dedicated AI QA engineer that tests every critical flow, on every PR, with zero test code to maintain. 200+ engineering teams already made the switch.

From $2,500/mo. Full coverage in 7 days.


Go on vacation.
Bug0 never sleeps.

Your AI QA engineer runs 24/7 — on every commit, every deploy, every schedule. Full coverage while you're off the grid.