coreyhaines31/ab-test-setup

When the user wants to plan, design, or implement an A/B test or experiment, or build a growth experimentation program. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," "hypothesis," "should I test this," "which version is better," "test two versions," "statistical significance," "how long should I run this test," "growth experiments," "experiment velocity," "experiment backlog," "ICE score," "experimentation program," or "experiment playbook." Use this whenever someone is comparing two approaches and wants to measure which performs better, or when they want to build a systematic experimentation practice. For tracking implementation, see analytics-tracking. For page-level conversion optimization, see page-cro.

global · version 1.2.0

A/B Test Setup

You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.

Initial Assessment

Check for product marketing context first: If .agents/product-marketing-context.md exists (or .claude/product-marketing-context.md in older setups), read it before asking questions. Use that context and ask only for information that isn't already covered or that is specific to this task.

Before designing a test, understand:

  1. Test Context - What are you trying to improve? What change are you considering?
  2. Current State - Baseline conversion rate? Current traffic volume?
  3. Constraints - Technical complexity? Timeline? Tools available?

Core Principles

1. Start with a Hypothesis

  • Not just "let's see what happens"
  • Specific prediction of outcome
  • Based on reasoning or data

2. Test One Thing

  • Single variable per test
  • Otherwise you don't know what worked

3. Statistical Rigor

  • Pre-determine sample size
  • Don't peek and stop early
  • Commit to the methodology

4. Measure What Matters

  • Primary metric tied to business value
  • Secondary metrics for context
  • Guardrail metrics to prevent harm

Hypothesis Framework

Structure

Because [observation/data],
we believe [change]
will cause [expected outcome]
for [audience].
We'll know this is true when [metrics].

Example

Weak: "Changing the button color might increase clicks."

Strong: "Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."


Test Types

| Type | Description | Traffic Needed |
|---|---|---|
| A/B | Two versions, single change | Moderate |
| A/B/n | Multiple variants | Higher |
| MVT | Multiple changes in combinations (see the sketch below) | Very high |
| Split URL | Different URLs for variants | Moderate |
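
To get a feel for why MVT needs so much more traffic, here is a rough back-of-envelope sketch (the factors, option counts, and per-cell number are purely illustrative):

```python
# A full-factorial multivariate test needs the per-variant sample size in every
# combination of options, so traffic requirements multiply quickly.
factors = {"headline": 2, "cta_color": 2, "hero_image": 3}  # hypothetical factors

cells = 1
for options in factors.values():
    cells *= options

per_cell = 7_000  # e.g., ~5% baseline detecting a 20% lift (see the table below)
print(f"{cells} combinations x {per_cell:,} visitors each = {cells * per_cell:,} total")
# -> 12 combinations x 7,000 visitors each = 84,000 total
```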

Sample Size

Quick Reference

| Baseline | 10% Lift | 20% Lift | 50% Lift |
|---|---|---|---|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |

Calculators: www.evanmiller.org, vwo.com, www.optimizely.com, www.abtestguide.com

For detailed sample size tables and duration calculations: See references/sample-size-guide.md
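
If you prefer to compute sample size directly rather than read it off a table or a calculator, the standard two-proportion approximation is only a few lines. This is a minimal sketch using only the Python standard library; it assumes a two-sided test at 95% confidence and 80% power, so the output will differ slightly from any particular calculator's rounding:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect `relative_lift` over `baseline`."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

n = sample_size_per_variant(baseline=0.05, relative_lift=0.20)
print(n)  # ~8.2k per variant; the quick-reference table above rounds to ~7k

daily_visitors = 1_000   # hypothetical traffic entering the test each day
variants = 2
print(f"~{n * variants / daily_visitors:.0f} days at {daily_visitors:,} visitors/day")
```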


Metrics Selection

Primary Metric

  • Single metric that matters most
  • Directly tied to hypothesis
  • What you'll use to call the test

Secondary Metrics

  • Support primary metric interpretation
  • Explain why/how the change worked

Guardrail Metrics

  • Things that shouldn't get worse
  • Stop test if significantly negative

Example: Pricing Page Test

  • Primary: Plan selection rate
  • Secondary: Time on page, plan distribution
  • Guardrail: Support tickets, refund rate

Designing Variants

What to Vary

| Category | Examples |
|---|---|
| Headlines/Copy | Message angle, value prop, specificity, tone |
| Visual Design | Layout, color, images, hierarchy |
| CTA | Button copy, size, placement, number |
| Content | Information included, order, amount, social proof |

Best Practices

  • Single, meaningful change
  • Bold enough to make a difference
  • True to the hypothesis

Traffic Allocation

| Approach | Split | When to Use |
|---|---|---|
| Standard | 50/50 | Default for A/B |
| Conservative | 90/10, 80/20 | Limit risk of bad variant |
| Ramping | Start small, increase | Technical risk mitigation |

Considerations:

  • Consistency: users see the same variant on return visits (see the bucketing sketch below)
  • Balanced exposure across time of day/week
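
The consistency point above is usually handled by deterministic bucketing: hash a stable user identifier together with the experiment key so the same user always lands in the same variant and the split percentages hold. A minimal sketch, not tied to any particular tool (the function name, experiment key, and allocation below are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment_key: str, allocation=None) -> str:
    """Deterministically map a user to a variant; same inputs always give the same answer."""
    allocation = allocation or {"control": 50, "variant": 50}  # percentages summing to 100
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100   # stable 0-99 bucket per user + experiment
    threshold = 0
    for name, pct in allocation.items():
        threshold += pct
        if bucket < threshold:
            return name
    return "control"  # fallback if the allocation sums to less than 100

print(assign_variant("user_123", "pricing-page-cta"))  # same result on every call
print(assign_variant("user_123", "pricing-page-cta", {"control": 90, "variant": 10}))
```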

Implementation

Client-Side

  • JavaScript modifies page after load
  • Quick to implement, can cause flicker
  • Tools: PostHog, Optimizely, VWO

Server-Side

  • Variant determined before render
  • No flicker, requires dev work
  • Tools: PostHog, LaunchDarkly, Split

Running the Test

Pre-Launch Checklist

  • Hypothesis documented
  • Primary metric defined
  • Sample size calculated
  • Variants implemented correctly
  • Tracking verified
  • QA completed on all variants

During the Test

DO:

  • Monitor for technical issues
  • Check segment quality
  • Document external factors

Avoid:

  • Peeking at results and stopping early
  • Making changes to variants mid-test
  • Adding traffic from new sources

The Peeking Problem

Looking at results before reaching sample size and stopping early leads to false positives and wrong decisions. Pre-commit to sample size and trust the process.
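
A quick simulation makes the cost concrete. The sketch below (illustrative only, standard-library Python) runs A/A tests where both arms have the same true conversion rate, checks a z-test after every batch of visitors, and stops at the first p < 0.05; the false-positive rate ends up far above the nominal 5%:

```python
import random
from statistics import NormalDist

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test on raw counts."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z))

random.seed(0)
runs, checks, batch, rate = 200, 10, 500, 0.05   # A/A test: both arms convert at 5%
false_positives = 0
for _ in range(runs):
    ca = cb = na = nb = 0
    for _ in range(checks):
        ca += sum(random.random() < rate for _ in range(batch))
        na += batch
        cb += sum(random.random() < rate for _ in range(batch))
        nb += batch
        if p_value(ca, na, cb, nb) < 0.05:   # "peek" and stop at first significance
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / runs:.0%}")  # typically ~15-20%
```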


Analyzing Results

Statistical Significance

  • 95% confidence = p-value < 0.05
  • Means a result at least this extreme would appear less than 5% of the time if there were no real difference
  • Not a guarantee, just a threshold (see the worked example below)
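
As a worked illustration (made-up counts, standard-library Python; your testing tool will normally report this for you), the lift and its 95% confidence interval can be computed from raw conversion counts:

```python
from statistics import NormalDist

control = {"conversions": 480, "visitors": 10_000}   # hypothetical results
variant = {"conversions": 552, "visitors": 10_000}

p_c = control["conversions"] / control["visitors"]
p_v = variant["conversions"] / variant["visitors"]
diff = p_v - p_c
se = (p_c * (1 - p_c) / control["visitors"] + p_v * (1 - p_v) / variant["visitors"]) ** 0.5
z = NormalDist().inv_cdf(0.975)   # 95% two-sided
low, high = diff - z * se, diff + z * se

print(f"Absolute lift: {diff:.2%} (95% CI: {low:.2%} to {high:.2%})")
print(f"Relative lift: {diff / p_c:.1%}")
# If the interval excludes 0, the result is significant at the 95% level;
# also ask whether even the low end of the interval is worth shipping.
```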

Analysis Checklist

  1. Reach sample size? If not, result is preliminary
  2. Statistically significant? Check confidence intervals
  3. Effect size meaningful? Compare to MDE, project impact
  4. Secondary metrics consistent? Support the primary?
  5. Guardrail concerns? Anything get worse?
  6. Segment differences? Mobile vs. desktop? New vs. returning?

Interpreting Results

| Result | Conclusion |
|---|---|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or bolder test |
| Mixed signals | Dig deeper, maybe segment |

Documentation

Document every test with:

  • Hypothesis
  • Variants (with screenshots)
  • Results (sample, metrics, significance)
  • Decision and learnings

For templates: See references/test-templates.md


Growth Experimentation Program

Individual tests are valuable. A continuous experimentation program is a compounding asset. This section covers how to run experiments as an ongoing growth engine, not just one-off tests.

The Experiment Loop

1. Generate hypotheses (from data, research, competitors, customer feedback)
2. Prioritize with ICE scoring
3. Design and run the test
4. Analyze results with statistical rigor
5. Promote winners to a playbook
6. Generate new hypotheses from learnings
→ Repeat

Hypothesis Generation

Feed your experiment backlog from multiple sources:

| Source | What to Look For |
|---|---|
| Analytics | Drop-off points, low-converting pages, underperforming segments |
| Customer research | Pain points, confusion, unmet expectations |
| Competitor analysis | Features, messaging, or UX patterns they use that you don't |
| Support tickets | Recurring questions or complaints about conversion flows |
| Heatmaps/recordings | Where users hesitate, rage-click, or abandon |
| Past experiments | "Significant loser" tests often reveal new angles to try |

ICE Prioritization

Score each hypothesis 1-10 on three dimensions:

| Dimension | Question |
|---|---|
| Impact | If this works, how much will it move the primary metric? |
| Confidence | How sure are we this will work? (Based on data, not gut.) |
| Ease | How fast and cheap can we ship and measure this? |

ICE Score = (Impact + Confidence + Ease) / 3

Run highest-scoring experiments first. Re-score monthly as context changes.
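
A tiny sketch of scoring and ranking a backlog (the hypothesis names and scores below are invented purely for illustration):

```python
backlog = [
    {"name": "Social proof near pricing CTA", "impact": 7, "confidence": 6, "ease": 8},
    {"name": "Shorter signup form", "impact": 8, "confidence": 5, "ease": 4},
    {"name": "New homepage headline", "impact": 6, "confidence": 4, "ease": 9},
]

for item in backlog:
    item["ice"] = round((item["impact"] + item["confidence"] + item["ease"]) / 3, 1)

# Highest ICE score first: run these experiments before the rest of the backlog.
for item in sorted(backlog, key=lambda x: x["ice"], reverse=True):
    print(f'{item["ice"]:>4}  {item["name"]}')
```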

Experiment Velocity

Track your experimentation rate as a leading indicator of growth:

| Metric | Target |
|---|---|
| Experiments launched per month | 4-8 for most teams |
| Win rate | 20-30% is common for mature programs (sustained higher rates may indicate conservative hypotheses) |
| Average test duration | 2-4 weeks |
| Backlog depth | 20+ hypotheses queued |
| Cumulative lift | Compound gains from all winners (see the sketch below) |
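
For the cumulative-lift row, a quick illustration of how individual wins compound (the win count and average lift are hypothetical):

```python
wins, avg_lift = 10, 0.05   # ten winning experiments at +5% each
print(f"Cumulative lift: {(1 + avg_lift) ** wins - 1:.0%}")   # 63%, not 50%
```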

The Experiment Playbook

When a test wins, don't just implement it — document the pattern:

## [Experiment Name]
**Date**: [date]
**Hypothesis**: [the hypothesis]
**Sample size**: [n per variant]
**Result**: [winner/loser/inconclusive] — [primary metric] changed by [X%] (95% CI: [range], p=[value])
**Guardrails**: [any guardrail metrics and their outcomes]
**Segment deltas**: [notable differences by device, segment, or cohort]
**Why it worked/failed**: [analysis]
**Pattern**: [the reusable insight — e.g., "social proof near pricing CTAs increases plan selection"]
**Apply to**: [other pages/flows where this pattern might work]
**Status**: [implemented / parked / needs follow-up test]

Over time, your playbook becomes a library of proven growth patterns specific to your product and audience.

Experiment Cadence

Weekly (30 min): Review running experiments for technical issues and guardrail metrics. Don't call winners early — but do stop tests where guardrails are significantly negative.

Bi-weekly: Conclude completed experiments. Analyze results, update playbook, launch next experiment from backlog.

Monthly (1 hour): Review experiment velocity, win rate, cumulative lift. Replenish hypothesis backlog. Re-prioritize with ICE.

Quarterly: Audit the playbook. Which patterns have been applied broadly? Which winning patterns haven't been scaled yet? What areas of the funnel are under-tested?


Common Mistakes

Test Design

  • Testing too small a change (undetectable)
  • Testing too many things (can't isolate)
  • No clear hypothesis

Execution

  • Stopping early
  • Changing things mid-test
  • Not checking implementation

Analysis

  • Ignoring confidence intervals
  • Cherry-picking segments
  • Over-interpreting inconclusive results

Task-Specific Questions

  1. What's your current conversion rate?
  2. How much traffic does this page get?
  3. What change are you considering and why?
  4. What's the smallest improvement worth detecting?
  5. What tools do you have for testing?
  6. Have you tested this area before?

Related Skills

  • page-cro: For generating test ideas based on CRO principles
  • analytics-tracking: For setting up test measurement
  • copywriting: For creating variant copy
Files: 4 · 22.4 KB


Overall Score: 89/100 · Grade: A (Excellent)
Safety: 94 · Quality: 88 · Clarity: 92 · Completeness: 81

Summary

This skill guides users through designing, running, and analyzing A/B tests and building continuous experimentation programs. It provides frameworks for hypothesis formation, sample size calculation, metrics selection, statistical rigor, and result interpretation, along with templates and tools for documenting and scaling experimentation practices.

Detected Capabilities

  • Hypothesis generation and validation using structured frameworks
  • Sample size and test duration calculation with reference tables
  • Metrics selection and definition (primary, secondary, guardrail)
  • Statistical significance and confidence interval interpretation
  • Multivariate test design and traffic multiplier calculation
  • Experiment prioritization and ICE scoring methodology
  • Results documentation and playbook creation for pattern reuse
  • Test implementation guidance (client-side vs server-side)
  • Common mistake identification and prevention
  • Experimentation program management and cadence recommendations

Trigger Keywords

Phrases that MCP clients use to match this skill to user intent.

design A/B test, statistical significance, sample size calculation, experiment hypothesis, conversion rate optimization, test results interpretation, experimentation program, multivariate test, test duration planning, growth experiment velocity

Risk Signals

  • INFO: External domain references for sample size calculators (vwo.com, optimizely.com, evanmiller.org, abtestguide.com). Location: SKILL.md, Sample Size section and references/sample-size-guide.md
  • INFO: Directs users to check for an external context file (.agents/product-marketing-context.md or .claude/product-marketing-context.md). Location: SKILL.md, Initial Assessment section
  • INFO: References external testing tools (PostHog, Optimizely, VWO, LaunchDarkly, Split) but does not execute or interact with them. Location: SKILL.md, Implementation section

Referenced Domains

External domains referenced in skill content, detected by static analysis.

vwo.com, www.abtestguide.com, www.evanmiller.org, www.optimizely.com

Use Cases

  • Design and launch a single A/B test with proper statistical rigor
  • Build a continuous experimentation program with hypothesis generation and prioritization
  • Calculate sample size and test duration based on traffic and baseline metrics
  • Define primary, secondary, and guardrail metrics for a test
  • Analyze test results and interpret statistical significance
  • Document test results and extract reusable growth patterns into a playbook
  • Prioritize experiment backlog using ICE scoring
  • Train teams on experimentation best practices and common mistakes

Quality Notes

  • Excellent hypothesis framework with clear structure ('Because... we believe... will cause... we'll know when...')
  • Comprehensive sample size reference tables for quick decision-making, reducing calculation burden
  • Well-organized metrics selection framework with primary/secondary/guardrail tiers that prevents common optimization pitfalls
  • Strong emphasis on statistical rigor and the 'peeking problem' — directly addresses a critical experimentation mistake
  • Practical guidance on test types (A/B, A/B/n, MVT, split URL) with traffic requirements for each
  • Includes growth program structure beyond individual tests, showing cadence (weekly, bi-weekly, monthly, quarterly)
  • ICE prioritization framework is clear and actionable for experiment backlog management
  • Excellent use of tables, checklists, and examples for clarity and reference
  • Strong caveats section on common mistakes (test design, execution, analysis) helps users avoid pitfalls
  • Templates in references/ are comprehensive and cover the full lifecycle (plan → results → repository → stakeholder updates)
  • Segment analysis guidance addresses real-world complexity (mobile vs desktop, new vs returning)
  • The playbook structure ('Pattern → Apply to → Status') creates institutional learning from test results
  • Experiment velocity metrics provide leading indicators of program health, not just test success rate
  • Clear scope boundaries: defers copywriting tasks to copywriting skill, analytics setup to analytics-tracking skill, CRO strategy to page-cro skill
  • Evals file includes 7 well-designed scenarios testing hypothesis formation, peeking problem awareness, multivariate complexity, metrics selection, result interpretation, and skill boundary recognition
Model: claude-haiku-4-5-20251001 · Analyzed: Apr 20, 2026


Version History

  • v1.1 (2026-04-20, latest): Content updated
  • v1.0 (2026-04-19): No changelog
