UAT test scripts

tl;dr: A UAT test script is a step-by-step guide written in business language that lets a non-technical user verify a flow against acceptance criteria. The structure matters: clear preconditions, simple steps, observable outcomes, and a place to record results.


Why UAT scripts are different from QA scripts

QA scripts are written for engineers and testers. UAT scripts are written for end users.

The difference is bigger than it sounds. Engineers tolerate jargon, abbreviations, and implicit context. Business users do not. A script that says "POST to /api/orders with the cart payload" is unusable for the business analyst running UAT.

A good UAT script reads like a customer support guide: short steps, plain language, screenshots when needed.


Required structure of a UAT script

Six sections cover most needs.

1. Title

Specific. "Place an order using a saved credit card" beats "Test checkout."

2. Acceptance criterion

The single thing this script verifies, written in Given/When/Then or equivalent.

3. Preconditions

What must be true before starting: a logged-in user, a populated cart, a specific account state.

4. Steps

Numbered, with one action per step. Each step starts with a verb. "Click the Buy Now button on the product page."

5. Expected result

What should happen after each step or after the full sequence. Written observably: "An order confirmation page appears with order number," not "the order is created."

6. Result and notes

A space for the participant to record pass/fail and any observations.
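For teams that treat scripts as structured data rather than prose, the six sections above map naturally onto a record type. A minimal sketch in Python; the class and field names are illustrative, not a standard:

```python
from dataclasses import dataclass


@dataclass
class UATScript:
    """One way to model the six sections of a UAT script."""
    title: str                   # specific, e.g. "Place an order using a saved credit card"
    acceptance_criterion: str    # Given/When/Then text from the user story
    preconditions: list[str]     # what must be true before starting
    steps: list[str]             # one verb-first action per step
    expected_results: list[str]  # observable outcomes only
    result: str = "not run"      # "pass" / "fail" / "not run"
    notes: str = ""

    def is_complete(self) -> bool:
        # A script is runnable only if every required section is filled in.
        return all([self.title, self.acceptance_criterion,
                    self.preconditions, self.steps, self.expected_results])
```

A shape like this makes it easy to lint a script library for missing sections before handing anything to a UAT participant.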


A working template

## UAT Script: Place an order using a saved credit card

### Acceptance criterion
Given a logged-in customer with at least one saved credit card,
when they complete checkout,
then the order is placed using the saved card without re-entering details.

### Preconditions
- Logged in as the test customer (uat-customer-1@example.com).
- At least one saved credit card on the account.
- At least one item in the cart.

### Steps
1. Click "Cart" in the top navigation.
2. Click "Proceed to Checkout."
3. Verify the saved card is selected by default.
4. Verify the order total is correct.
5. Click "Place Order."

### Expected result
- The order confirmation page appears.
- An order number is displayed.
- An order confirmation email arrives within 5 minutes.

### Result
- [ ] Pass
- [ ] Fail
- Notes:

This template works for most non-trivial flows. Keep it short. A 30-step script is a sign that the flow itself needs to be broken into smaller verifiable units.


What to avoid

Technical jargon. "Validate the API response schema" is engineer language. UAT participants do not validate schemas. They verify outcomes they can see.

Implicit context. "Navigate to the dashboard" assumes the participant knows where the dashboard is. "Click 'Dashboard' in the left sidebar" does not.

Combined steps. "Log in and navigate to settings" is two actions. Split into two numbered steps so the participant knows where to stop if something fails.

Vague outcomes. "The order is created" is invisible. "The order confirmation page appears with a 6-digit order number" is observable.

Missing screenshots. For unfamiliar flows, a screenshot per major step prevents misinterpretation. Cheap to capture, expensive to skip.


Linking scripts to acceptance criteria

Every UAT script should link back to a specific acceptance criterion in the user story. The link goes both ways:

- The story points to the script that verifies its criterion.
- The script names the story it traces to.

This traceability is what makes UAT defensible in audits and useful in retrospectives. Without it, a passing UAT does not actually prove anything specific.


Storing and managing UAT scripts

Three patterns work; choose based on scale.

Confluence or Notion docs. Fine for small teams. Each script is a page. Easy to write, hard to version-control.

TestRail, Zephyr, Xray. Test case management tools designed for this. Track results, version scripts, generate reports.

Markdown in the repo. Scripts live next to the code, version controlled with the build. Lightweight and powerful for engineering-led teams.

There is no one right answer. The pattern that fits your team's existing workflow wins.


Automating UAT script execution

Most UAT must remain human. Some parts can be automated.

For scripts that produce purely observable, deterministic outcomes, a tool like Bug0 can run the same script as an automated check on every deploy. The human UAT then focuses on judgment-heavy aspects: usability, copy, business fit.

The benefit: regression risk drops without changing the UAT cycle. UAT becomes a check on intent, not on whether the build still works.
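Mechanically, automating the deterministic part amounts to pairing each plain-language step with a check for its observable outcome, and stopping at the first failure just as a human participant would. A hypothetical sketch of that idea, not Bug0's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    action: str                 # the plain-language step from the UAT script
    check: Callable[[], bool]   # automated check for the observable outcome


def run_script(steps: list[Step]) -> tuple[str, list[str]]:
    """Execute steps in order; stop at the first failure, like a human would."""
    log = []
    for number, step in enumerate(steps, start=1):
        ok = step.check()
        log.append(f"{number}. {step.action}: {'pass' if ok else 'FAIL'}")
        if not ok:
            return "fail", log
    return "pass", log
```

In practice the `check` callables would drive a browser; the point of the sketch is that the automated run keeps the script's step-by-step shape, so its log reads like a filled-in result section.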


FAQs

How long should a UAT script be?

Most useful scripts are 5 to 15 steps. Shorter feels incomplete. Longer becomes hard to follow.

Should UAT scripts include negative tests?

Yes, for the cases business users are likely to encounter. "What happens if my saved card is expired?" is a real scenario. "What happens if I send malformed JSON?" is not a UAT concern.

Who writes UAT scripts?

QA usually drafts them with input from the business analyst or product manager. The business owner reviews to make sure the language and scope match the user's perspective.

How many UAT scripts do I need?

One per acceptance criterion. Most user stories have 3 to 7 criteria, so a story typically needs 3 to 7 scripts.

How does Bug0 reduce UAT scripting effort?

Bug0 tests the same flows automatically. Many scripts that were once human-only become automated checks, with UAT focusing only on the parts that need human judgment.

Ship every deploy with confidence.

Bug0 gives you a dedicated AI QA engineer that tests every critical flow, on every PR, with zero test code to maintain. 200+ engineering teams already made the switch.

From $2,500/mo. Full coverage in 7 days.

Go on vacation.
Bug0 never sleeps.

Your AI QA engineer runs 24/7 — on every commit, every deploy, every schedule. Full coverage while you're off the grid.