tldr: User Acceptance Testing (UAT) is where real users (or their representatives) verify that the build solves their problem. Done well, it catches misalignment between requirements and reality. Done poorly, it becomes a final-stage formality that ships the wrong product anyway.
What UAT is for
UAT answers the question: did we build the right thing?
System testing answered the earlier question: did we build the thing right? Both questions matter, but they need different testers and different success criteria.
The classic UAT failure mode is letting QA run it. QA cares whether the system works. UAT needs to test whether the system solves a real user problem. These are different goals, and they need different perspectives.
Types of UAT
Four types, each with a different audience and a different question.
Alpha UAT
Internal stakeholders test the build before customer exposure. Catches the obvious mismatches between spec and result.
Beta UAT
Real customers test the build before general release. Provides feedback on actual usage patterns. Output: feature feedback, a prioritized bug list, and (usually) some scope cuts.
Contract acceptance testing
For B2B builds, the customer formally accepts the build per contract. Pass means signed off; fail means the contract is not yet fulfilled.
Operational acceptance testing
Verifies the build can be operated in production by the operations team: monitoring, deployment, rollback, runbooks, support tooling.
Most products need at least alpha and beta UAT. Regulated and B2B products usually need contract and operational variants too.
Who should run UAT
Not QA. Repeat: not QA.
UAT participants by build type:
- Internal SaaS: end users from the team that requested the feature.
- B2B: the customer's business analysts or decision-makers.
- B2C: real customers, recruited via beta program or research panel.
- Operational UAT: support, ops, SRE, or whoever runs the system.
QA's role in UAT is to support, not execute. They prepare the environment, write UAT test scripts, train participants, and capture findings.
A working UAT process
Five steps that consistently produce useful UAT outcomes.
1. Define acceptance criteria upfront
The criteria belong in the user story, written before development starts. Use Given/When/Then or equivalent specific, testable language. See acceptance testing for the broader pattern.
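To make that concrete, here is a minimal sketch of one criterion translated into an executable check. Everything in it (the invoice model, the 30-day rule) is hypothetical; the point is that Given/When/Then maps directly onto setup, action, and assertion.
```python
# Hypothetical criterion: "Given an invoice more than 30 days past due,
# when the daily billing job runs, then the invoice is marked overdue."
from dataclasses import dataclass

@dataclass
class Invoice:
    days_past_due: int
    status: str = "open"

def run_billing_job(invoices):
    # The behavior under test: anything more than 30 days past due is overdue.
    for invoice in invoices:
        if invoice.days_past_due > 30:
            invoice.status = "overdue"

def test_overdue_invoice_is_flagged():
    # Given an invoice 31 days past its due date
    invoice = Invoice(days_past_due=31)
    # When the daily billing job runs
    run_billing_job([invoice])
    # Then the invoice is marked overdue
    assert invoice.status == "overdue"
```
If the criterion cannot be written this mechanically, it is not specific enough to sign off against.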
2. Prepare a UAT environment
Stable, populated with realistic data, isolated from production. Participants should not be debugging environment issues during their session.
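One way to get there is a seed script that resets the UAT tenant to a known-good data set before every session. A sketch, assuming a hypothetical admin API behind `UAT_BASE_URL`; the endpoints and fixture data are illustrative, not any particular product's API.
```python
# Resets an isolated UAT tenant and loads realistic fixtures so every
# session starts from the same state. Endpoints and data are hypothetical.
import os
import requests

UAT_BASE_URL = os.environ.get("UAT_BASE_URL", "https://uat.example.internal")

SEED_ACCOUNTS = [
    {"name": "Acme Corp", "plan": "enterprise", "seats": 250},
    {"name": "Smallco", "plan": "starter", "seats": 3},
]

def reset_and_seed():
    # Wipe first, then seed: idempotence is what makes sessions comparable.
    requests.post(f"{UAT_BASE_URL}/admin/reset", timeout=30).raise_for_status()
    for account in SEED_ACCOUNTS:
        resp = requests.post(f"{UAT_BASE_URL}/accounts", json=account, timeout=30)
        resp.raise_for_status()

if __name__ == "__main__":
    reset_and_seed()
```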
3. Train participants
Give a 30-minute walkthrough of what the system does. Provide a written reference. Make it clear which scenarios are in scope.
4. Run sessions
Time-boxed (2 to 4 hours each). Participants execute the scripted scenarios, then explore freely. Capture every issue, even cosmetic ones.
5. Triage and resolve
Sort findings into blockers, fixes-before-launch, and post-launch backlog. Get explicit sign-off from the right authority. Document scope cuts publicly.
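As a sketch of the mechanics (the finding shape is hypothetical; the three buckets are the ones above), triage can be as simple as grouping by severity so the launch decision becomes mechanical:
```python
# Buckets findings by severity: any open blocker means no sign-off.
from collections import defaultdict

def triage(findings):
    buckets = defaultdict(list)
    for finding in findings:
        buckets[finding["severity"]].append(finding)
    return buckets

findings = [
    {"id": 1, "severity": "blocker", "note": "Checkout fails for EU accounts"},
    {"id": 2, "severity": "fix-before-launch", "note": "Totals rounded incorrectly"},
    {"id": 3, "severity": "post-launch", "note": "Tooltip typo on settings page"},
]

buckets = triage(findings)
ready_for_sign_off = not buckets["blocker"]
```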
What participants struggle with
Three patterns recur across teams.
Vague test scripts. Participants do not know what they are supposed to verify. Solution: scripts written in business language, not technical language. See UAT test scripts.
Environment instability. Tests fail because the test data was wrong, not because the build was wrong. Solution: invest in environment and test-data stability before UAT starts.
Scope creep. Participants find unrelated issues and the UAT cycle balloons. Solution: put the scope in writing, and keep a separate backlog for out-of-scope findings.
Sign-off criteria
Vague sign-off ("looks good") produces post-launch arguments. Specific sign-off prevents them.
A useful pattern:
- 100% of P0 acceptance criteria pass.
- 95% of P1 criteria pass.
- All P0 and P1 defects fixed and re-verified.
- Sign-off captured in writing by the named approver.
Anything less leaves room for interpretation.
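Expressed as code, the gate is a few lines. A sketch with hypothetical data shapes; the thresholds are the ones above.
```python
# Returns True only when the sign-off pattern above is fully satisfied.
def uat_gate(criteria, defects):
    # criteria: [{"priority": "P0" | "P1", "passed": bool}, ...]
    # defects:  [{"priority": str, "fixed": bool, "reverified": bool}, ...]
    def pass_rate(priority):
        relevant = [c for c in criteria if c["priority"] == priority]
        return sum(c["passed"] for c in relevant) / len(relevant) if relevant else 1.0

    p0_p1_closed = all(
        d["fixed"] and d["reverified"]
        for d in defects
        if d["priority"] in ("P0", "P1")
    )
    return pass_rate("P0") == 1.0 and pass_rate("P1") >= 0.95 and p0_p1_closed
```
The value is not the code; it is that every term in the gate is checkable, so "looks good" never enters the decision.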
Automating UAT
Most UAT involves human judgment and cannot be fully automated. The deterministic parts can.
Patterns that work:
- Automated scripts handle the data setup before each session.
- Automated regression runs continuously so UAT participants are not finding regression bugs (see the sketch after this list).
- AI testing tools like Bug0 verify the deterministic flows continuously, freeing UAT participants to focus on judgment-heavy testing.
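As an illustration of the regression pattern, here is one deterministic flow pinned down with Playwright's Python sync API (`pip install playwright`). The URL and selectors are hypothetical.
```python
# A smoke-level regression check that runs continuously so UAT participants
# never spend session time discovering that login is broken.
from playwright.sync_api import sync_playwright

def test_login_reaches_dashboard():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://uat.example.internal/login")
        page.fill("#email", "uat-user@example.com")
        page.fill("#password", "known-test-password")
        page.click("button[type=submit]")
        page.wait_for_url("**/dashboard")
        browser.close()
```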
This shifts UAT from "did this even build?" to "does this actually solve the problem?" The latter is the question UAT was meant to answer.
FAQs
How is UAT different from QA testing?
QA verifies the system works as built. UAT verifies the system solves the user's problem. Same build, different criteria.
Can UAT find new requirements?
Often, yes. UAT regularly surfaces requirements no one captured. The right move is to acknowledge them as scope changes, not to silently expand the build.
How long should UAT take?
Depends on scope. A simple feature: 1 to 2 days. A major release: 1 to 3 weeks. Anything longer usually means UAT started too late or scope was too large.
What if UAT finds blocking issues at the deadline?
The deadline moves, the scope moves, or you ship known issues with documentation. There is no fourth option.
How does Bug0 support UAT?
Bug0 runs continuous regression as a done-for-you QA service, so UAT participants do not waste time on bugs the automated suite should have caught. Their attention goes to judgment-heavy verification, which is where UAT earns its cost.
