Test design

tldr: Test design is the work of turning requirements into specific, executable test cases using techniques like equivalence partitioning, boundary value analysis, decision tables, and state transition diagrams. Skip it and you end up with tests that prove your code does what it does, not what it should do.


Why test design is a separate skill

Writing test code and designing tests are different skills. Most engineers can do the first. Doing the second well requires a discipline most engineers were never taught.

The symptom of skipped test design: a test suite with hundreds of cases, all of which pass, that misses the bug a customer found in 30 seconds. The tests prove the code does what the engineer wrote. They do not prove the code does what was needed.

Good test design starts from the requirement, not from the code.


Black-box vs white-box design

Two camps, both useful.

Black-box design. You know what the system should do but not how. Test cases come from requirements, contracts, and user behavior. Most acceptance and system tests are black-box.

White-box design. You know the implementation and design tests around its structure. Coverage targets specific branches, paths, and code blocks. Most unit tests are white-box.

Mature suites use both. Black-box catches "wrong thing built." White-box catches "thing built wrong."


Techniques that produce useful tests

Equivalence partitioning

Group inputs into classes that should behave the same. Test one representative per class. See equivalence partitioning for the full breakdown.
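As a minimal sketch, assume a hypothetical signup form whose age field accepts 18 through 65. The validator, the field, and the class boundaries here are all invented for illustration; the point is one representative input per class, not one test per input:

```python
# Hypothetical validator: accepts integer ages 18 through 65 inclusive.
def is_valid_age(value):
    return isinstance(value, int) and 18 <= value <= 65

# One representative per equivalence class, not every possible input.
partitions = {
    "below range": (10, False),
    "in range": (40, True),
    "above range": (70, False),
    "wrong type": ("forty", False),
}

for name, (value, expected) in partitions.items():
    assert is_valid_age(value) == expected, name
```

Four cases cover the same ground as forty-eight ad hoc ones, because every other value in a class should behave like its representative.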

Boundary value analysis

Most bugs cluster at boundaries. Test exactly at the edge, just above, just below. For an input that accepts 1 to 100, test 0, 1, 2, 99, 100, 101.
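The 1-to-100 example above can be written out directly. The range check is a stand-in for whatever validation the system actually does:

```python
# Hypothetical check for an input that accepts 1 to 100 inclusive.
def in_range(n):
    return 1 <= n <= 100

# Boundary values: at each edge, just below, just above.
cases = [(0, False), (1, True), (2, True),
         (99, True), (100, True), (101, False)]

for value, expected in cases:
    assert in_range(value) == expected, value
```

An off-by-one bug (writing `1 < n` instead of `1 <= n`) fails the `(1, True)` case immediately, which is exactly the class of bug this technique targets.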

Decision tables

For business rules with multiple conditions, build a table mapping condition combinations to expected outcomes. Each row becomes a test case. See decision table testing.
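A sketch of the idea, using an invented shipping rule (free shipping for members, or for any order of $50 or more). The table enumerates every condition combination; the loop turns each row into a test:

```python
# Hypothetical rule: free shipping for members, or for orders of $50+.
def free_shipping(is_member, total):
    return is_member or total >= 50

# Decision table: every combination of conditions maps to an outcome.
decision_table = [
    # (is_member, order >= $50) -> free shipping expected
    ((True,  True),  True),
    ((True,  False), True),
    ((False, True),  True),
    ((False, False), False),
]

for (is_member, big_order), expected in decision_table:
    total = 60 if big_order else 20
    assert free_shipping(is_member, total) == expected
```

With two conditions the table has four rows; with five it has thirty-two, which is precisely why writing the table out beats guessing which combinations matter.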

State transition testing

For stateful systems (workflows, sessions, finite state machines), enumerate states and transitions. Test each transition, including the ones that should be impossible.
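A minimal sketch with an invented order workflow. The transition map doubles as the test oracle: every legal transition gets exercised, and at least one illegal transition is asserted to fail:

```python
# Hypothetical order workflow as a finite state machine.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("new", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def apply_event(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event} from {state}")
    return TRANSITIONS[(state, event)]

# Test every legal transition...
for (state, event), expected in TRANSITIONS.items():
    assert apply_event(state, event) == expected

# ...and a transition that should be impossible.
try:
    apply_event("shipped", "cancel")  # cannot cancel after shipping
    assert False, "expected ValueError"
except ValueError:
    pass
```

The "should be impossible" cases are the valuable ones: systems that silently accept an illegal transition are where workflow corruption starts.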

Use case testing

Walk through user goals, end to end. Test each path the user could take. Often catches integration bugs that unit tests miss.
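A sketch of the shape of a use case test, with a toy in-memory store standing in for the real application. Every class and method here is invented; the point is that one test follows a whole user goal rather than a single function:

```python
# Hypothetical in-memory store, standing in for the real app under test.
class Store:
    def __init__(self):
        self.users, self.carts = {}, {}

    def sign_up(self, email):
        self.users[email] = True
        self.carts[email] = []

    def add_to_cart(self, email, item):
        self.carts[email].append(item)

    def checkout(self, email):
        items, self.carts[email] = self.carts[email], []
        return {"items": items, "status": "placed"}

# Walk one full user goal end to end: sign up, shop, check out.
store = Store()
store.sign_up("a@example.com")
store.add_to_cart("a@example.com", "book")
order = store.checkout("a@example.com")

assert order["status"] == "placed"
assert order["items"] == ["book"]
assert store.carts["a@example.com"] == []  # cart emptied after checkout
```

The last assertion is the kind of cross-step check (checkout must reset the cart) that unit tests of `checkout` alone tend to miss.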

Pairwise testing

When parameters interact, testing all combinations explodes quickly. Pairwise testing covers every pair of parameter values, which catches most interaction bugs at a fraction of the cost.
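A small worked example, with invented parameters: three parameters of three values each give 27 full combinations, but a 9-row suite built from an orthogonal array covers every pair of values. The coverage check below is generic; the array construction is a known trick for the 3x3x3 case, and tools like PICT handle the general case:

```python
from itertools import combinations, product

# Three hypothetical parameters, three values each: 27 full combinations.
browsers = ["chrome", "firefox", "safari"]
locales = ["en", "de", "ja"]
plans = ["free", "pro", "team"]
assert len(list(product(browsers, locales, plans))) == 27

# A 9-row pairwise suite from an orthogonal array:
# row (i, j) uses plan index (i + j) % 3.
suite = [(browsers[i], locales[j], plans[(i + j) % 3])
         for i in range(3) for j in range(3)]

def pairs_covered(rows):
    # Every pair of values across any two parameters must appear in a row.
    params = [browsers, locales, plans]
    needed = {((x, a), (y, b))
              for x, y in combinations(range(3), 2)
              for a in params[x] for b in params[y]}
    covered = {((x, row[x]), (y, row[y]))
               for row in rows
               for x, y in combinations(range(3), 2)}
    return needed <= covered

assert len(suite) == 9 and pairs_covered(suite)
```

Nine tests instead of twenty-seven, with every two-way interaction still exercised; the savings grow dramatically as parameters are added.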


A working design process

A simple sequence that scales:

  1. Read the requirement. Understand what is being asked, not just what is written.
  2. Identify the inputs and outputs. Inputs include user actions, system events, configuration, time. Outputs include UI, data changes, downstream effects.
  3. Apply techniques in order. Equivalence partitioning to find classes. Boundary value to pick values. Decision tables for combinations. State transitions for workflow.
  4. Add edge cases. Empty input. Maximum input. Concurrent input. Invalid characters. Unusual data types.
  5. Add adversarial cases. What if a user does this with malice? With confusion?
  6. Review with someone else. Test design improves when challenged.

Skipping the last step is the most common failure mode. Test design done in isolation reflects the designer's blind spots.
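The sequence above, applied to one invented requirement ("a 10% discount applies to orders of $20 or more"), might produce a suite like this. The function, threshold, and rounding rule are all assumptions for the sake of illustration:

```python
# Hypothetical requirement: 10% discount on orders of $20 or more.
def discounted(total):
    if total < 0:
        raise ValueError("negative total")
    return round(total * 0.9, 2) if total >= 20 else total

cases = [
    # Step 3: equivalence classes and boundary values.
    (19.99, 19.99),   # just below the threshold: no discount
    (20.00, 18.00),   # exactly at the threshold: discounted
    (20.01, 18.01),   # just above the threshold
    # Step 4: edge cases.
    (0, 0),           # empty order
]
for total, expected in cases:
    assert discounted(total) == expected, total

# Step 5: adversarial input should fail loudly, not silently.
try:
    discounted(-5)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Step 6 is the one a code snippet cannot show: someone else reading this suite would ask about currency rounding and stacked discounts, which is the whole point of the review.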


How AI testing reframes test design

Traditional test design produces written test cases that humans or automated frameworks execute.

AI testing platforms shift this. You describe the goal in plain language and the agent figures out the steps. Bug0 and Passmark generate test variations, vary inputs, and report failures. The design work shifts from "write the steps" to "frame the goal correctly."

This makes design faster but does not eliminate it. A poorly framed goal still produces bad tests, only faster.


Tools that support test design

  • TestRail, Zephyr, Xray. Test case management with traceability.
  • PICT, Hexawise. Pairwise test generation.
  • TestComplete, TOSCA. Model-based testing tools.
  • Pytest parametrize, JUnit @ParameterizedTest. Code-level test design helpers.
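For the code-level helpers, a quick sketch of what pytest's parametrize looks like in practice, reusing the hypothetical 1-to-100 range check from earlier. Each tuple becomes its own named case in the test report:

```python
import pytest

# Hypothetical validator for an input that accepts 1 to 100 inclusive.
def in_range(n):
    return 1 <= n <= 100

# Boundary cases as data; pytest expands each tuple into a separate test.
@pytest.mark.parametrize("value,expected", [
    (0, False), (1, True), (100, True), (101, False),
])
def test_in_range(value, expected):
    assert in_range(value) == expected
```

The technique-driven thinking stays the same; the framework just keeps the case list declarative and the failure report granular.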

Tools do not replace skill. A team that designs poorly with TestRail will design poorly without it.


FAQs

How is test design different from test planning?

Test planning decides what to test and how. Test design decides which specific cases to write. Planning is strategic, design is tactical.

Should every requirement have a test case?

Every testable requirement, yes. Some requirements (like "the system should be reliable") need to be broken into testable sub-requirements first.

How many test cases per requirement?

It depends on complexity. Simple requirements need 2 to 5 cases. Complex ones (involving multiple states, conditions, or edge cases) might need 20 to 50.

What is the most overlooked test design technique?

State transition testing. Most teams do equivalence partitioning intuitively. Almost no one explicitly maps state transitions, which is where workflow bugs hide.

Can Bug0 generate test designs automatically?

Bug0 generates test variations from a stated goal, including boundary cases and edge inputs. Reviewing the generated tests and adding domain-specific cases still has high value.

Ship every deploy with confidence.

Bug0 gives you a dedicated AI QA engineer that tests every critical flow, on every PR, with zero test code to maintain. 200+ engineering teams already made the switch.

From $2,500/mo. Full coverage in 7 days.
Go on vacation.
Bug0 never sleeps.

Your AI QA engineer runs 24/7 — on every commit, every deploy, every schedule. Full coverage while you're off the grid.