tldr: In Scrum, testing happens every sprint, not at the end. QA fits into the sprint by writing acceptance tests during development, contributing to definition of done, and running continuous regression. Skipping any of these creates the testing-at-the-end trap.
Where QA fits in a sprint
A two-week Scrum sprint typically has:
Sprint planning. QA participates. Acceptance criteria are written before development starts. Tests are designed against those criteria.
During the sprint. QA tests stories as they become testable. Often this means QA-engineer pairing on specific stories.
Sprint review. QA verifies stories are demoable. Failed acceptance means the story is not done.
Sprint retrospective. QA contributes feedback on the testing process.
The mistake most teams make is treating QA as a step at the end of the sprint. By the time stories reach QA, the engineer has moved on, and bugs found late slip to the next sprint.
Definition of done
In a healthy Scrum team, "done" includes testing. A story is not done until:
- Code is written.
- Code is reviewed and merged.
- Acceptance criteria pass.
- Regression suite still passes.
- No new P0 or P1 defects introduced.
If "done" only means "code merged," QA always runs late. If "done" includes testing, the team plans capacity accordingly.
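The checklist above can be sketched as a simple gate. This is an illustrative sketch, not a real tool; the field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class StoryStatus:
    """Illustrative definition-of-done fields; names are hypothetical."""
    code_written: bool
    reviewed_and_merged: bool
    acceptance_criteria_pass: bool
    regression_suite_passes: bool
    new_p0_p1_defects: int

def is_done(s: StoryStatus) -> bool:
    # "Done" requires every condition, not just "code merged".
    return (
        s.code_written
        and s.reviewed_and_merged
        and s.acceptance_criteria_pass
        and s.regression_suite_passes
        and s.new_p0_p1_defects == 0
    )
```

The point of writing it down this way: "code merged" is one of five conditions, not the definition.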
Continuous regression
Sprint-based testing works only if regression is continuous, not per-sprint.
Pattern:
- Unit and integration tests run on every commit.
- Smoke E2E runs on every PR.
- Full regression runs on every merge to main.
- AI testing platforms like Bug0 make this affordable for E2E coverage.
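One way to wire the tiers above into CI is to map the triggering event to the suites that run. A minimal sketch; the event and suite names are illustrative and not tied to any specific CI product:

```python
def suites_for(event: str) -> list[str]:
    """Map a CI trigger to the regression tiers described above.
    Event and suite names are illustrative."""
    tiers = {
        "commit": ["unit", "integration"],                      # every commit
        "pull_request": ["unit", "integration", "smoke_e2e"],   # every PR
        "merge_to_main": ["unit", "integration", "smoke_e2e",
                          "full_regression"],                   # every merge
    }
    return tiers.get(event, [])
```

The design choice that matters: each tier is a superset of the one before it, so a green merge to main implies every cheaper check already passed.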
Without continuous regression, every sprint accumulates regression debt. By sprint 3, QA spends more time on regression than on the new sprint's stories.
Story-level vs release-level testing
Story-level testing happens during the sprint. Acceptance criteria pass for individual stories.
Release-level testing happens before deploying. Smoke regression on the full release, plus any release-specific concerns.
Both matter. Sprint-only testing misses cross-story integration issues. Release-only testing misses opportunities to catch defects early.
What "testable" means in a sprint
A story is testable in a sprint when it has three properties.
Independent. Can the story be tested without depending on another story finishing?
Bounded. Is the scope clear enough that you can write specific test cases?
Verifiable. Can the acceptance criteria be checked objectively?
Stories failing any of these get pushed back during sprint planning. QA's job is to insist on testability before the sprint commits.
Common Scrum testing failures
QA at the end. Already covered. Stop doing this.
Acceptance criteria written by engineering alone. Engineering writes criteria they know they can pass. QA writes criteria that match user behavior. Both perspectives matter.
No regression budget. Every sprint adds tests. The test suite gets slower. Eventually it stops running. Allocate capacity for test maintenance.
Stories carried over for testing only. A story spilled to the next sprint because "QA needs more time" is a planning failure. Either the story was too big, or QA was not engaged early enough.
How AI testing fits
The biggest enabler of Scrum testing is making continuous regression affordable. As a forward-deployed QA team, Bug0 runs E2E regression on every PR with low maintenance, which removes the testing-at-the-end pressure.
FAQs
Can a sprint deliver tested code?
Yes, when QA is engaged from sprint planning and continuous regression is in place. Without those, sprints deliver code that gets tested after the fact.
Should QA write tests during the sprint?
Yes. Tests written during the sprint are part of the work. Tests written next sprint are technical debt.
How do you handle bugs found in the sprint?
P0/P1 bugs get fixed in-sprint. P2/P3 bugs get added to the backlog and prioritized in sprint planning.
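That triage policy is small enough to state as code. A sketch using the usual P0-P3 severity convention:

```python
def triage(severity: str) -> str:
    """Route an in-sprint bug by severity: P0/P1 are fixed in-sprint,
    P2/P3 go to the backlog for prioritization in sprint planning."""
    if severity in ("P0", "P1"):
        return "fix-in-sprint"
    if severity in ("P2", "P3"):
        return "backlog"
    raise ValueError(f"unknown severity: {severity}")
```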
How does Bug0 fit Scrum?
Bug0 runs regression on every PR, which is what makes Scrum testing actually work. QA capacity goes to acceptance, not regression.
