What is A/B Testing?
A method of comparing two versions of a webpage, feature, or experience to determine which performs better.
A/B testing (also called split testing) compares two versions of something to determine which performs better. Users are randomly assigned to either version A (the control) or version B (the variant), and their behavior is measured against defined success metrics.
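Random assignment is commonly implemented by hashing a stable user identifier into a bucket, so the split is effectively random across users but consistent for any single user. A minimal Python sketch, assuming a hypothetical experiment name and a 50/50 split:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage_cta") -> str:
    """Bucket a user into 'A' (control) or 'B' (variant) for one experiment.

    Hashing the experiment name together with the user ID gives a split that
    is random across users but stable for any single user, so a returning
    visitor always sees the same version.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99
    return "A" if bucket < 50 else "B"  # 50/50 split

print(assign_variant("user_123"))  # deterministic: same answer every call for this user
```

Including the experiment name in the hash keeps assignments independent across experiments, so a user who lands in variant B of one test is not systematically biased toward B in the next.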
How A/B Testing Works
- Hypothesis: Form a clear hypothesis about what change will improve a metric
- Variants: Create the control (current version) and one or more variants
- Randomization: Split traffic randomly between variants
- Measurement: Track the key metric for each group
- Analysis: Determine whether the difference is statistically significant (a worked sketch follows this list)
- Decision: Roll out the winner or iterate
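For the analysis step, a common choice when the metric is a conversion rate is a two-proportion z-test. A minimal sketch; the conversion counts below are purely illustrative, not data from any real experiment:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))               # two-sided p-value

# Illustrative counts: 480 conversions from 10,000 users in A, 540 from 10,000 in B
p_value = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"p-value: {p_value:.4f}")  # compare against the chosen threshold, e.g. 0.05
```

The p-value is then compared against the significance threshold chosen before the test started; other metric types (revenue, engagement time) typically call for different tests, such as a t-test.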
Common Metrics
- Conversion rate
- Click-through rate
- Revenue per user
- Engagement time
- Retention rate
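As a rough illustration, metrics such as conversion rate and revenue per user are per-variant aggregates over user records. A small sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical per-user records: (user_id, variant, converted, revenue)
records = [
    ("u1", "A", True, 20.0),
    ("u2", "A", False, 0.0),
    ("u3", "B", True, 35.0),
    ("u4", "B", True, 15.0),
]

totals = defaultdict(lambda: {"users": 0, "conversions": 0, "revenue": 0.0})
for _, variant, converted, revenue in records:
    totals[variant]["users"] += 1
    totals[variant]["conversions"] += int(converted)
    totals[variant]["revenue"] += revenue

for variant, t in sorted(totals.items()):
    conversion_rate = t["conversions"] / t["users"]
    revenue_per_user = t["revenue"] / t["users"]
    print(f"{variant}: conversion rate {conversion_rate:.1%}, revenue per user ${revenue_per_user:.2f}")
```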
Best Practices
Start with a hypothesis: "We believe that [change] will [outcome] because [reason]."
Calculate sample size: Know how many users you need before starting; a sample-size sketch follows at the end of this section.
Run to completion: Avoid peeking at results and stopping early.
Document learnings: Win or lose, capture what you learned.
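For the sample-size calculation, one common approach is the normal-approximation formula for comparing two proportions. A rough sketch, assuming a two-sided test at 5% significance and 80% power; the baseline rate and minimum detectable effect are illustrative values:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed in each variant to detect an absolute lift of
    `mde` over a `baseline` conversion rate (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = baseline + mde / 2                     # average of the two conversion rates
    variance = 2 * p_bar * (1 - p_bar)             # pooled variance approximation
    n = variance * (z_alpha + z_beta) ** 2 / mde ** 2
    return ceil(n)

# Illustrative: detecting a lift from 5% to 6% conversion
print(sample_size_per_variant(baseline=0.05, mde=0.01))  # roughly 8,000+ users per variant
```

Smaller detectable effects or higher power requirements increase the required sample size quickly, which is why the calculation belongs before the test starts rather than after.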
Related Terms
Statistical Significance: A measure of confidence that an observed difference between variants is real and not due to random chance.
Experiment Velocity: The rate at which a team runs and completes experiments, typically measured as experiments per month.
Control Group: The baseline group in an experiment that receives the current experience, used for comparison against test variants.