tldr: Selenium has no built-in visual regression testing. You need external libraries like SeleniumBase, Pillow, or cloud platforms like Applitools and Percy to compare screenshots between builds. It works, but the setup cost is real.
Selenium wasn't built for visual testing
Selenium WebDriver automates browsers. It clicks buttons, fills forms, and asserts text. What it doesn't do is compare how your UI looks between deploys.
Visual regression testing catches unintended visual changes. A shifted button, a broken layout, a font that swapped from 14px to 16px. These are bugs that functional tests miss entirely.
To do visual regression testing with Selenium, you need to bolt on screenshot capture, image storage, and pixel comparison. That's either a library that handles it for you, or a cloud platform with a Selenium SDK. Either way, it's extra infrastructure.
The Python approach: SeleniumBase
SeleniumBase is the most popular Python framework for Selenium visual regression testing in 2026. It wraps Selenium WebDriver and adds a check_window() method that handles baseline creation, screenshot capture, and comparison in a single call.
Here's a complete example:
```python
from seleniumbase import BaseCase
BaseCase.main(__name__, __file__)

class VisualRegressionTest(BaseCase):
    def test_homepage_visual(self):
        self.open("https://your-app.com")
        self.check_window(name="homepage", level=2)

    def test_login_page_visual(self):
        self.open("https://your-app.com/login")
        self.wait_for_element("#login-form")
        self.check_window(name="login_page", level=2)

    def test_dashboard_after_login(self):
        self.open("https://your-app.com/login")
        self.type("#email", "test@example.com")
        self.type("#password", "password123")
        self.click('button[type="submit"]')
        self.wait_for_element(".dashboard-content")
        self.check_window(name="dashboard", level=2)
```
The level parameter controls sensitivity:

- Level 1: Runs check_window() but doesn't compare. Use this to create initial baselines.
- Level 2: Compares the current screenshot against the saved baseline and logs any differences found.
- Level 3: Same as level 2, but fails the test if differences exceed the threshold.
On the first run, SeleniumBase saves baseline images to a visual_baseline/ directory. Subsequent runs compare against those baselines. When your UI intentionally changes, delete the old baselines and re-run at level 1.
Install it with:
```shell
pip install seleniumbase
```
That's it. No separate image libraries, no manual screenshot management. SeleniumBase handles the full pipeline.
Rolling your own with Pillow and pixelmatch
If you need more control than SeleniumBase provides, you can build a custom visual comparison pipeline. This is common when teams want custom diff thresholds, region masking, or integration with an existing test harness.
```python
from selenium import webdriver
from PIL import Image
from pixelmatch.contrib.PIL import pixelmatch
import os

def capture_screenshot(url, filename):
    driver = webdriver.Chrome()
    driver.set_window_size(1920, 1080)
    driver.get(url)
    driver.save_screenshot(filename)
    driver.quit()

def compare_images(baseline_path, current_path, diff_path):
    baseline = Image.open(baseline_path)
    current = Image.open(current_path)

    # Images must be the same size for a pixel-level comparison
    if baseline.size != current.size:
        raise ValueError("Baseline and current screenshots differ in size")

    width, height = baseline.size
    diff_image = Image.new("RGBA", (width, height))

    # The PIL wrapper reads the dimensions from the images themselves
    mismatch_count = pixelmatch(
        baseline, current, diff_image,
        threshold=0.1,
        includeAA=True
    )
    diff_image.save(diff_path)

    total_pixels = width * height
    diff_percentage = (mismatch_count / total_pixels) * 100
    return diff_percentage

# Usage
capture_screenshot("https://your-app.com", "current.png")
if os.path.exists("baseline.png"):
    diff_pct = compare_images("baseline.png", "current.png", "diff.png")
    print(f"Visual difference: {diff_pct:.2f}%")
    if diff_pct > 0.5:
        raise AssertionError(f"Visual regression detected: {diff_pct:.2f}% difference")
else:
    os.rename("current.png", "baseline.png")
    print("Baseline created.")
```
Pillow handles image loading and manipulation. pixelmatch does the actual pixel-by-pixel comparison and generates a diff image highlighting changed areas. The threshold parameter (0.0 to 1.0) sets how different two pixels' colors must be before they count as a mismatch, and the includeAA flag controls whether anti-aliased pixels are included in the comparison.
This approach gives you full control. You decide what percentage of changed pixels is acceptable. You choose where baselines are stored. You handle the CI integration yourself.
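The core decision, what percentage of changed pixels is acceptable, is simple enough to sketch without any imaging library. A minimal, stdlib-only illustration of the threshold logic (the function name and pixel format are hypothetical, not part of any library):

```python
def diff_percentage(baseline_pixels, current_pixels, per_channel_tolerance=0):
    """Percentage of pixels that differ between two equal-length pixel lists.

    Each pixel is an (R, G, B) tuple. A pixel counts as changed when any
    channel differs by more than per_channel_tolerance.
    """
    if len(baseline_pixels) != len(current_pixels):
        raise ValueError("Images must be the same size")
    changed = sum(
        1
        for a, b in zip(baseline_pixels, current_pixels)
        if any(abs(x - y) > per_channel_tolerance for x, y in zip(a, b))
    )
    return changed / len(baseline_pixels) * 100

# Example: 1 of 4 pixels changed by 5 per channel
baseline = [(255, 255, 255)] * 4
current = [(255, 255, 255)] * 3 + [(250, 250, 250)]
print(diff_percentage(baseline, current))      # 25.0 with zero tolerance
print(diff_percentage(baseline, current, 10))  # 0.0, within tolerance
```

Libraries like pixelmatch do the same thing with smarter color-distance math and anti-aliasing detection, but the accept/reject decision is yours either way.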
The downside: you maintain all of it yourself. Baseline management, viewport consistency, dynamic content masking. These are solved problems in dedicated visual regression testing tools, but here you're building from scratch.
Java: Selenium with AShot
Java teams typically use AShot for Selenium visual regression testing. AShot captures full-page screenshots (including areas below the fold) and provides an image comparison API.
```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import ru.yandex.qatools.ashot.AShot;
import ru.yandex.qatools.ashot.Screenshot;
import ru.yandex.qatools.ashot.comparison.ImageDiff;
import ru.yandex.qatools.ashot.comparison.ImageDiffer;
import ru.yandex.qatools.ashot.shooting.ShootingStrategies;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class VisualRegressionTest {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://your-app.com");

            // Full-page screenshot with scrolling
            Screenshot screenshot = new AShot()
                .shootingStrategy(ShootingStrategies.viewportPasting(100))
                .takeScreenshot(driver);

            File baselineFile = new File("baseline.png");
            if (baselineFile.exists()) {
                BufferedImage baselineImage = ImageIO.read(baselineFile);
                ImageDiff diff = new ImageDiffer().makeDiff(
                    new Screenshot(baselineImage),
                    screenshot
                );
                if (diff.hasDiff()) {
                    ImageIO.write(diff.getMarkedImage(), "PNG", new File("diff.png"));
                    System.out.println("Diff pixel count: " + diff.getDiffSize());
                    throw new AssertionError("Visual regression detected");
                }
                System.out.println("No visual differences found.");
            } else {
                ImageIO.write(screenshot.getImage(), "PNG", baselineFile);
                System.out.println("Baseline created.");
            }
        } finally {
            // Quit even when the assertion fails, so Chrome doesn't leak
            driver.quit();
        }
    }
}
```
Add AShot to your Maven project:
```xml
<dependency>
    <groupId>ru.yandex.qatools.ashot</groupId>
    <artifactId>ashot</artifactId>
    <version>1.5.4</version>
</dependency>
```
AShot's viewportPasting strategy scrolls the page and stitches screenshots together. This is critical for long pages where a single viewport screenshot would miss content below the fold. The ImageDiffer class compares the two images and produces a diff image with changed areas highlighted in red.
.NET: Visual Studio with Selenium WebDriver
When people search for "visual studio regression testing," they're usually looking to run Selenium visual regression tests inside Visual Studio IDE. This isn't a standalone Microsoft product. It's the combination of Visual Studio, a .NET test framework (NUnit or MSTest), and Selenium WebDriver.
Here's a practical setup with NUnit:
```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using System.Drawing;
using System.IO;

namespace VisualRegressionTests
{
    [TestFixture]
    public class HomepageVisualTests
    {
        private IWebDriver _driver;
        private string _baselinePath = "baselines";
        private string _currentPath = "current";

        [SetUp]
        public void Setup()
        {
            _driver = new ChromeDriver();
            _driver.Manage().Window.Size = new Size(1920, 1080);
            Directory.CreateDirectory(_baselinePath);
            Directory.CreateDirectory(_currentPath);
        }

        [Test]
        public void Homepage_ShouldMatchBaseline()
        {
            _driver.Navigate().GoToUrl("https://your-app.com");
            var screenshot = ((ITakesScreenshot)_driver).GetScreenshot();
            var currentFile = Path.Combine(_currentPath, "homepage.png");
            screenshot.SaveAsFile(currentFile);

            var baselineFile = Path.Combine(_baselinePath, "homepage.png");
            if (!File.Exists(baselineFile))
            {
                File.Copy(currentFile, baselineFile);
                Assert.Pass("Baseline created. Run the test again to compare.");
                return;
            }

            // Use ImageSharp or similar for pixel comparison
            var isMatch = CompareImages(baselineFile, currentFile);
            Assert.That(isMatch, Is.True, "Visual regression detected on homepage");
        }

        private bool CompareImages(string baseline, string current)
        {
            var baselineBytes = File.ReadAllBytes(baseline);
            var currentBytes = File.ReadAllBytes(current);
            if (baselineBytes.Length != currentBytes.Length) return false;

            int diffCount = 0;
            for (int i = 0; i < baselineBytes.Length; i++)
            {
                if (baselineBytes[i] != currentBytes[i]) diffCount++;
            }
            double diffPercentage = (double)diffCount / baselineBytes.Length * 100;
            return diffPercentage < 0.5;
        }

        [TearDown]
        public void Teardown()
        {
            _driver.Quit();
        }
    }
}
```
For production-grade pixel comparison in .NET, swap out the byte-level comparison for SixLabors.ImageSharp or a dedicated library. The comparison above operates on the compressed PNG bytes, not on decoded pixels, so in practice it can only tell you whether two screenshots are identical: any rendering difference, including harmless anti-aliasing shifts, changes the entire byte stream. It's a starting point for detecting obvious regressions, nothing more.
Install the Selenium packages via NuGet:
```shell
dotnet add package Selenium.WebDriver
dotnet add package Selenium.WebDriver.ChromeDriver
dotnet add package NUnit
dotnet add package NUnit3TestAdapter
```
This runs directly in Visual Studio's Test Explorer. Set up a CI pipeline with dotnet test to catch regressions on every pull request.
Cloud platforms with Selenium SDKs
If you don't want to manage baselines, diff images, and threshold tuning yourself, cloud platforms handle the visual comparison infrastructure. You keep writing Selenium tests. The platform captures, stores, and compares screenshots.
Applitools Eyes
The most established visual testing platform. Its Selenium SDK integrates with a few lines of code:
```python
from selenium import webdriver
from applitools.selenium import Eyes

driver = webdriver.Chrome()
eyes = Eyes()
eyes.api_key = "YOUR_API_KEY"

eyes.open(driver, "Your App", "Homepage Test", {"width": 1920, "height": 1080})
driver.get("https://your-app.com")
eyes.check_window("Homepage")
eyes.close()
driver.quit()
```
Applitools uses AI-powered visual comparison. It distinguishes meaningful layout changes from irrelevant rendering differences. This reduces false positives compared to pixel-level comparison. Pricing starts around $500/month for small teams.
Percy (BrowserStack)
Percy integrates with Selenium through its SDK. It captures snapshots and renders them across multiple browsers and viewport sizes in the cloud.
```python
from selenium import webdriver
from percy import percy_snapshot

driver = webdriver.Chrome()
driver.get("https://your-app.com")
percy_snapshot(driver, "Homepage")
driver.quit()
```
Percy's value is cross-browser visual testing. One snapshot gets rendered in Chrome, Firefox, Safari, and different viewport sizes. Pricing is per-snapshot, starting at $399/month.
TestMu AI SmartUI
Formerly LambdaTest Visual UI Testing, TestMu AI SmartUI offers Selenium-integrated visual regression. It runs your Selenium tests on their cloud grid and captures visual comparisons automatically.
```python
from selenium import webdriver

# Selenium 4 removed the desired_capabilities argument;
# pass capabilities through an options object instead
options = webdriver.ChromeOptions()
options.set_capability("platform", "Windows 10")
options.set_capability("version", "latest")
options.set_capability("smartUI.project", "Your Project")
options.set_capability("smartUI.build", "Build #1")

driver = webdriver.Remote(
    command_executor="https://hub.testmu.ai/wd/hub",
    options=options
)
driver.get("https://your-app.com")
driver.execute_script("smartui.takeScreenshot=homepage")
driver.quit()
```
TestingBot VRT
TestingBot provides visual regression testing on their Selenium grid. Screenshots are captured during test runs and compared against baselines in their dashboard. Setup is similar to other cloud platforms. You point your RemoteWebDriver at TestingBot's hub and enable visual snapshots.
Comparing the approaches
| Approach | Setup effort | Maintenance | Cost | Best for |
|---|---|---|---|---|
| SeleniumBase (Python) | Low | Medium | Free | Python teams wanting quick VRT |
| Pillow + pixelmatch | High | High | Free | Teams needing custom comparison logic |
| AShot (Java) | Medium | Medium | Free | Java teams with existing Selenium suites |
| NUnit + Selenium (.NET) | Medium | High | Free | .NET teams using Visual Studio |
| Applitools Eyes | Low | Low | ~$500/mo+ | Teams wanting AI-powered comparison |
| Percy | Low | Low | $399/mo+ | Teams needing cross-browser VRT |
| TestMu AI SmartUI | Low | Low | Varies | Teams already on the TestMu grid |
The open source rows are free in license cost but paid for in engineering time. Cloud platforms cost money but eliminate the infrastructure work.
The hard parts of Selenium VRT
Dynamic content
Timestamps, user avatars, ads, animations. These change between runs and create false positives. You need to either mask these regions before comparison or wait for animations to complete before capturing.
In SeleniumBase, you can use CSS injection to hide dynamic elements:
```python
self.execute_script("""
    document.querySelector('.timestamp').style.visibility = 'hidden';
    document.querySelector('.avatar').style.visibility = 'hidden';
""")
self.check_window(name="dashboard_masked", level=3)
```
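In the DIY Pillow pipeline, the equivalent is masking the same regions on both images before comparing them. A sketch, where the helper name and region coordinates are hypothetical:

```python
from PIL import Image, ImageDraw

def mask_regions(image, regions, fill=(0, 0, 0)):
    """Paint solid rectangles over dynamic regions so they never diff.

    regions: list of (left, top, right, bottom) boxes in pixels.
    Apply the same mask to the baseline and the current screenshot.
    """
    masked = image.copy()
    draw = ImageDraw.Draw(masked)
    for box in regions:
        draw.rectangle(box, fill=fill)
    return masked

# Example: blank out a timestamp area before diffing
img = Image.new("RGB", (100, 100), (255, 255, 255))
masked = mask_regions(img, [(10, 10, 60, 30)])
```

Run both the baseline and the current screenshot through the same mask, then feed the masked copies to your comparison function.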
Viewport and browser consistency
Screenshots taken on macOS Chrome look different from screenshots on Linux Chrome. Font rendering, sub-pixel anti-aliasing, and scrollbar styles vary by OS. Run your VRT suite in Docker containers with a fixed OS and browser version for consistent results.
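One common way to pin the environment is the official Selenium Docker images. For example (the image tag is a placeholder; pin it to a specific browser version rather than latest):

```shell
# Same OS, browser, and driver on every machine and in CI
docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome:<pinned-version>
```

Your tests then connect through webdriver.Remote pointed at http://localhost:4444, so local runs and CI runs render pixels identically.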
Baseline management
Baselines are the source of truth. When your UI intentionally changes, you need to update them. With 200+ visual tests, that's 200+ images to review and approve. Cloud platforms solve this with visual review dashboards. The DIY approach means manually deleting and regenerating baseline files.
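For the DIY approach, even a small helper that promotes reviewed screenshots to baselines takes some of the sting out of updates. A sketch, assuming screenshots land in a current/ directory and baselines live in visual_baseline/ (both directory names are assumptions):

```python
import shutil
from pathlib import Path

def promote_baselines(current_dir="current", baseline_dir="visual_baseline"):
    """Copy every reviewed screenshot over its baseline counterpart.

    Run this only after a human has reviewed the diffs and confirmed the
    changes are intentional, then commit the updated baselines to Git.
    """
    current = Path(current_dir)
    baseline = Path(baseline_dir)
    baseline.mkdir(parents=True, exist_ok=True)
    promoted = []
    for screenshot in sorted(current.glob("*.png")):
        shutil.copy2(screenshot, baseline / screenshot.name)
        promoted.append(screenshot.name)
    return promoted
```

Committing the promoted images in the same pull request as the UI change keeps the baseline history reviewable.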
Full-page screenshots
Selenium's get_screenshot_as_png() only captures the viewport. For full-page screenshots, you need to scroll and stitch. AShot handles this in Java. In Python, you either use SeleniumBase's built-in support or write scrolling logic yourself.
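The stitching half of scroll-and-stitch is straightforward once you have the viewport captures. A simplified Pillow sketch (the Selenium scrolling loop is omitted, and real pages need overlap trimming for the final partial viewport):

```python
from PIL import Image

def stitch_vertically(slices):
    """Paste a list of same-width viewport screenshots into one tall image.

    Assumes the capture loop scrolled by exactly one viewport height each
    time, so the slices don't overlap.
    """
    width = slices[0].width
    total_height = sum(s.height for s in slices)
    page = Image.new("RGB", (width, total_height))
    y = 0
    for s in slices:
        page.paste(s, (0, y))
        y += s.height
    return page

# Example with two fake viewport captures
top = Image.new("RGB", (100, 50), (255, 0, 0))
bottom = Image.new("RGB", (100, 50), (0, 0, 255))
page = stitch_vertically([top, bottom])
```

The hard part in practice isn't the pasting, it's scrolling deterministically: fixed headers, lazy-loaded images, and the last partial viewport all need handling before the slices line up.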
Selenium VRT vs. Playwright VRT
This is the honest comparison. Playwright has built-in visual regression testing. One method call: expect(page).toHaveScreenshot(). It handles baseline creation, comparison, diff generation, and threshold configuration out of the box.
Selenium has none of that. Every piece must be assembled from external libraries and platforms.
For new projects starting in 2026, Playwright's built-in toHaveScreenshot() is the simpler choice. You get pixel comparison, configurable thresholds, auto-generated baselines, and CI-friendly reporting without any third-party dependencies.
For existing Selenium projects with hundreds of tests already written, migrating to Playwright just for VRT doesn't make sense. Add SeleniumBase or a cloud platform to your existing suite instead.
The trade-off is clear. Selenium VRT requires more setup and more maintenance. Playwright VRT works out of the box. But if you're already invested in Selenium, the ecosystem has enough tools to get it done.
When to move beyond Selenium VRT entirely
If you're spending more time maintaining your visual testing infrastructure than finding actual bugs, the tool is working against you. Selenium VRT setups grow complex: baseline management, dynamic content masking, viewport normalization, CI pipeline configuration. Each layer adds maintenance burden.
Bug0 Studio takes a different approach. Instead of assembling a screenshot comparison pipeline on top of Selenium, it uses AI-driven UI testing to detect visual issues without maintaining baselines at all. The AI understands what your UI should look like and flags anomalies automatically.
For teams that want to stop maintaining Selenium infrastructure entirely, Bug0 Managed provides forward-deployed QA engineers who handle your entire testing pipeline. No baselines. No flaky screenshot diffs. No late-night debugging why your CI failed because Chrome updated its font rendering.
Setting up Selenium VRT in CI
Visual regression tests belong in CI. Running them locally defeats the purpose since you need consistent environments to get reliable comparisons.
Here's a GitHub Actions example for a Python SeleniumBase setup:
```yaml
name: Visual Regression Tests
on: [pull_request]

jobs:
  visual-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: |
          pip install seleniumbase
          seleniumbase install chromedriver
      - name: Run visual tests
        run: pytest tests/visual/ --headless
      - name: Upload diff artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: visual-diffs
          path: visual_baseline/
```
The key detail: run on ubuntu-latest consistently. Never mix operating systems between baseline creation and comparison runs. Store baselines in your Git repository so every developer and CI run uses the same reference images.
FAQs
What is Selenium visual regression testing?
Selenium visual regression testing is the practice of using Selenium WebDriver to capture screenshots of your web application and comparing them against baseline images to detect unintended visual changes. Since Selenium has no built-in VRT capability, you need external libraries (SeleniumBase, AShot, Pillow) or cloud platforms (Applitools, Percy) to handle the comparison.
Can you do visual regression testing in Visual Studio?
Yes. "Visual studio regression testing" means running Selenium WebDriver visual tests inside Visual Studio IDE using .NET test frameworks like NUnit or MSTest. You write C# tests that capture screenshots and compare them against baselines. It's not a standalone Microsoft feature. It's Selenium plus a comparison library running in the Visual Studio test runner.
What is the best Python library for Selenium visual regression testing?
SeleniumBase is the most popular choice in 2026. Its check_window() method handles baseline creation, screenshot capture, and comparison in a single call. For teams that need custom comparison logic, combining Pillow for image processing with pixelmatch for pixel-level diffing gives you full control at the cost of more setup.
How does Selenium VRT compare to Playwright VRT?
Playwright has built-in visual regression testing with toHaveScreenshot(). No external libraries needed. Selenium requires third-party tools for every part of the pipeline: screenshot management, image comparison, and baseline storage. For new projects, Playwright is simpler. For existing Selenium projects, adding SeleniumBase or a cloud platform is the pragmatic choice. See the Playwright VRT guide for details.
How do you handle dynamic content in visual regression tests?
Hide or mask dynamic elements (timestamps, avatars, ads) before capturing screenshots. Inject CSS to set visibility: hidden on those elements. Wait for animations to complete before capturing. Some cloud platforms like Applitools handle dynamic regions automatically using AI comparison.
What are the best cloud platforms for Selenium visual regression testing?
Applitools Eyes, Percy (BrowserStack), TestMu AI SmartUI, and TestingBot all offer Selenium SDKs. Applitools uses AI comparison to reduce false positives. Percy renders snapshots across multiple browsers. TestMu AI SmartUI integrates with their cloud Selenium grid. Pricing ranges from $399/month to $500+/month depending on the platform and usage.
How do you manage baselines in Selenium VRT?
Store baseline images in your Git repository. Generate them on a consistent OS and browser version (use CI or Docker). When your UI intentionally changes, regenerate the affected baselines by running tests at baseline-creation mode (level 1 in SeleniumBase) and committing the new images. Cloud platforms provide visual review dashboards to approve or reject baseline updates.
Is Selenium visual regression testing worth the effort?
It depends on your existing investment. If you have a large Selenium test suite and want to add visual coverage, SeleniumBase or a cloud platform gets you there with reasonable effort. If you're starting fresh and your main goal is visual testing, Playwright's built-in VRT or a dedicated visual regression testing tool will be faster to set up and easier to maintain.