Catalog › affaan-m/browser-qa

Use this skill to automate visual testing and UI interaction verification using browser automation after deploying features.

global · v1.1 · Saved Apr 20, 2026

Browser QA — Automated Visual Testing & Interaction

When to Use

  • After deploying a feature to staging/preview
  • When you need to verify UI behavior across pages
  • Before shipping — confirm layouts, forms, interactions actually work
  • When reviewing PRs that touch frontend code
  • Accessibility audits and responsive testing

How It Works

Uses the browser automation MCP (claude-in-chrome, Playwright, or Puppeteer) to interact with live pages like a real user.

Phase 1: Smoke Test

1. Navigate to target URL
2. Check for console errors (filter noise: analytics, third-party)
3. Verify no 4xx/5xx in network requests
4. Screenshot above-the-fold on desktop + mobile viewport
5. Check Core Web Vitals: LCP < 2.5s, CLS < 0.1, INP < 200ms
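The Phase 1 checks above can be sketched as small helpers. The message and vitals shapes here are assumptions, since the exact payload depends on which browser MCP you use:

```python
# Sketch of Phase 1 helpers. Assumes console messages arrive as dicts
# with "level" and "text" keys; adapt to your browser MCP's payload.

NOISE_PATTERNS = ("analytics", "googletagmanager", "third-party cookie")

def filter_console_errors(messages):
    """Keep only first-party errors, dropping known analytics noise."""
    errors = [m for m in messages if m["level"] == "error"]
    return [m for m in errors
            if not any(p in m["text"].lower() for p in NOISE_PATTERNS)]

# "Good" thresholds per the checklist: LCP < 2.5s, CLS < 0.1, INP < 200ms.
CWV_THRESHOLDS = {"lcp_s": 2.5, "cls": 0.1, "inp_ms": 200}

def check_web_vitals(vitals):
    """Return only the metrics that miss their threshold (empty = pass)."""
    return {k: v for k, v in vitals.items()
            if k in CWV_THRESHOLDS and v >= CWV_THRESHOLDS[k]}
```

An empty dict from `check_web_vitals` means every measured metric is in the "good" range.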

Phase 2: Interaction Test

1. Click every nav link — verify no dead links
2. Submit forms with valid data — verify success state
3. Submit forms with invalid data — verify error state
4. Test auth flow: login → protected page → logout
5. Test critical user journeys (checkout, onboarding, search)
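For the nav-link step, a hypothetical summary pass: after clicking each link via the browser tool, collect (link text, HTTP status) pairs and classify them. The tuple shape is an assumption, not part of any MCP's API:

```python
# Hypothetical Phase 2 summary step for the dead-link check.

def summarize_nav_links(results):
    """results: list of (link_text, http_status) tuples.
    Status >= 400 or 0 (request never resolved) counts as dead."""
    dead = [(text, status) for text, status in results
            if status >= 400 or status == 0]
    total = len(results)
    return {
        "working": total - len(dead),
        "total": total,
        "dead": dead,
        "line": f"Nav links: {total - len(dead)}/{total} working",
    }
```

The `line` field matches the report format used in the Output Format section below.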

Phase 3: Visual Regression

1. Screenshot key pages at 3 breakpoints (375px, 768px, 1440px)
2. Compare against baseline screenshots (if stored)
3. Flag layout shifts > 5px, missing elements, overflow
4. Check dark mode if applicable
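The >5px shift rule can be expressed as a comparison of element bounding boxes between the baseline and current screenshots' DOM snapshots. The selector-keyed box dicts are an assumed shape, not a fixed API:

```python
# Sketch of the >5px layout-shift check from Phase 3. Assumes each
# element's bounding box is a dict with x/y/width/height in CSS pixels.

SHIFT_PX = 5

def flag_layout_shifts(baseline, current):
    """Compare boxes by selector; flag moves/resizes over SHIFT_PX
    and elements missing from the current snapshot."""
    issues = []
    for selector, base in baseline.items():
        box = current.get(selector)
        if box is None:
            issues.append(f"{selector}: missing")
            continue
        for key in ("x", "y", "width", "height"):
            delta = abs(box[key] - base[key])
            if delta > SHIFT_PX:
                issues.append(f"{selector}: {key} shifted {delta}px")
    return issues
```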

Phase 4: Accessibility

1. Run axe-core or equivalent on each page
2. Flag WCAG AA violations (contrast, labels, focus order)
3. Verify keyboard navigation works end-to-end
4. Check screen reader landmarks
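Narrowing axe-core output to AA findings might look like this sketch. It relies on axe's standard `tags` and `impact` fields, and includes level-A tags because AA conformance subsumes level A:

```python
# Sketch: filter axe-core results down to WCAG AA violations,
# worst impact first. Tag names follow axe-core's conventions.

AA_TAGS = {"wcag2a", "wcag2aa", "wcag21aa", "wcag22aa"}

def aa_violations(axe_results):
    """Keep only violations tagged at level A/AA, sorted by severity."""
    order = {"critical": 0, "serious": 1, "moderate": 2, "minor": 3}
    hits = [v for v in axe_results["violations"]
            if AA_TAGS & set(v.get("tags", []))]
    return sorted(hits, key=lambda v: order.get(v.get("impact"), 4))
```

Best-practice findings (tagged `best-practice`) are deliberately excluded, since they are not WCAG violations.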

Output Format

## QA Report — [URL] — [timestamp]

### Smoke Test
- Console errors: 0 critical, 2 warnings (analytics noise)
- Network: all 200/304, no failures
- Core Web Vitals: LCP 1.2s ✓, CLS 0.02 ✓, INP 89ms ✓

### Interactions
- [✓] Nav links: 12/12 working
- [✗] Contact form: missing error state for invalid email
- [✓] Auth flow: login/logout working

### Visual
- [✗] Hero section overflows on 375px viewport
- [✓] Dark mode: all pages consistent

### Accessibility
- 2 AA violations: missing alt text on hero image, low contrast on footer links

### Verdict: SHIP WITH FIXES (2 issues, 0 blockers)
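The verdict line could be driven by a rule like this hypothetical one, where each reported issue carries a `blocker` flag (an assumed shape):

```python
# Hypothetical verdict rule matching the report above: any blocker
# stops the ship; non-blocking issues downgrade to "SHIP WITH FIXES".

def verdict(issues):
    """issues: list of dicts with a boolean 'blocker' flag."""
    blockers = sum(1 for i in issues if i["blocker"])
    if blockers:
        return f"DO NOT SHIP ({len(issues)} issues, {blockers} blockers)"
    if issues:
        return f"SHIP WITH FIXES ({len(issues)} issues, 0 blockers)"
    return "SHIP (clean)"
```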

Integration

Works with any browser MCP:

  • mcp__claude-in-chrome__* tools (preferred — uses your actual Chrome)
  • Playwright via mcp__browserbase__*
  • Direct Puppeteer scripts

Pair with /canary-watch for post-deploy monitoring.

Files

1 file · 1.0 KB


Overall Score: 82/100 — Grade B (Good)

  • Safety: 95
  • Quality: 78
  • Clarity: 85
  • Completeness: 72

Summary

This skill guides AI agents through automated visual and interaction testing of web applications using browser automation. It orchestrates a four-phase QA workflow: smoke testing (console errors, network health, Core Web Vitals), interaction testing (form submission, auth flows, user journeys), visual regression (responsive design across breakpoints), and accessibility audits (WCAG compliance, keyboard navigation). The skill outputs a structured QA report with a final verdict on readiness to ship.

Detected Capabilities

  • Browser automation and page navigation
  • Screenshot capture at multiple viewport sizes
  • Console error and network request monitoring
  • Core Web Vitals measurement (LCP, CLS, INP)
  • Form interaction testing (valid/invalid submissions)
  • Authentication flow verification
  • Accessibility scanning (axe-core pattern reference)
  • Dark mode variant testing
  • Visual regression detection via screenshot comparison
  • Structured QA report generation

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

  • qa testing automation
  • visual regression testing
  • post-deploy verification
  • accessibility audit
  • form interaction testing

Use Cases

  • Verify UI behavior immediately after deploying to staging/preview environments
  • Validate form interactions, error states, and success states across user flows
  • Detect visual regressions and layout shifts across desktop and mobile viewports
  • Run accessibility audits to ensure WCAG AA compliance before release
  • Perform smoke testing to catch console errors and network failures post-deployment

Quality Notes

  • Clear four-phase structure with well-defined scope for each testing layer
  • Concrete output format provided with real-world example report (includes verdict logic)
  • Scope explicitly bounded to browser automation via documented MCPs (claude-in-chrome, Playwright, Browserbase)
  • Filters for console error noise (analytics, third-party) show pragmatic understanding of real deployments
  • Responsive breakpoints (375px, 768px, 1440px) and Core Web Vitals thresholds are specific and actionable
  • Paired integration guidance (`/canary-watch` for post-deploy monitoring) connects to related skills
  • No destructive operations — purely read-only observation and reporting
  • Error handling implicit in verdict logic (SHIP vs. SHIP WITH FIXES) but could be more explicit about recovery steps
  • Dark mode testing mentioned but not detailed; guidance on how to trigger/verify dark mode would strengthen Phase 3
  • Visual regression baseline comparison assumes stored baselines — no guidance on how baselines are managed or updated

Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026

Reviews

No reviews yet.

Version History

  • v1.1 (2026-04-20, latest): Content updated
  • v1.0 (2026-04-12): No changelog
