Two ways to test a login flow.
Script-based:
await page.click('[data-testid="email-input"]');
await page.fill('[data-testid="email-input"]', 'user@test.com');
await page.click('[data-testid="password-input"]');
await page.fill('[data-testid="password-input"]', 'secret123');
await page.click('[data-testid="login-btn"]');
await page.waitForSelector('.dashboard-header');
Outcome-based:
Enter email and password, click Log In, verify the dashboard loads.
Same test. Same coverage. One breaks when you rename a test ID or a CSS class. The other doesn't care.
Script-based testing encodes how your UI works right now. Every selector is a bet that the implementation won't change. Rename a component, swap a library, redesign a page — tests break. Not because the feature broke. Because the implementation moved.
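To make that concrete, suppose a refactor renames the login button's test ID (the new name below is invented for illustration):
// Before the refactor: <button data-testid="login-btn">Log In</button>
// This line passes.
await page.click('[data-testid="login-btn"]');
// After the refactor: <button data-testid="submit-login">Log In</button>
// Users can still log in, but the same line now fails with a timeout
// because the selector no longer matches anything.
await page.click('[data-testid="login-btn"]');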
Outcome-based testing encodes what should happen. The AI figures out the how. And when the how changes, it figures it out again.
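What does that look like written down? Here is a minimal sketch of the shape of an outcome-based test; the OutcomeTest type and field names are invented for illustration and are not Bug0 Studio's actual API.
// Hypothetical shape of an outcome-based test: plain-language steps plus the
// outcome to verify. No selectors, no knowledge of the DOM.
interface OutcomeTest {
  steps: string[]; // what the user does, in plain English
  expect: string;  // the observable result to check
}
const loginFlow: OutcomeTest = {
  steps: [
    'Enter the email user@test.com and the password secret123',
    'Click Log In',
  ],
  expect: 'The dashboard loads for the signed-in user',
};
However the UI is implemented this week, the spec stays the same; only the execution underneath it changes.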
This is the shift Bug0 Studio is built on. Testing should describe intent, not implementation.
Your PM doesn't write acceptance criteria in XPath. They write "user should be able to log in and see their dashboard." That's the test. Everything between the intent and the assertion is an implementation detail.
Let the AI own implementation details. You own outcomes.
Script-based testing was the best we had when machines couldn't understand English. Now they can.