    Experimentation
    Updated May 15, 2025

    What is A/B Testing?

    A method of comparing two versions of a webpage, feature, or experience to determine which performs better.

    A/B testing (also called split testing) compares two versions of a page, feature, or experience by randomly assigning users to version A (the control) or version B (the variant) and measuring each group's behavior against defined success metrics.

    How A/B Testing Works

    • Hypothesis: Form a clear hypothesis about what change will improve a metric
    • Variants: Create the control (current version) and one or more variants
    • Randomization: Split traffic randomly between variants
    • Measurement: Track the key metric for each group
    • Analysis: Determine whether the difference is statistically significant (see the sketch after this list)
    • Decision: Roll out the winner or iterate
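    The sketch below is a minimal Python illustration of the randomization and analysis steps: users are assigned to a bucket by hashing their id, and the difference in conversion rates is checked with a two-proportion z-test. The experiment name, function names, and counts are hypothetical, not taken from any particular tool.

        import hashlib
        import math


        def assign_variant(user_id: str, experiment: str = "checkout-cta") -> str:
            """Deterministically assign a user to 'A' (control) or 'B' (variant).

            Hashing the user id together with the experiment name gives a stable,
            roughly 50/50 split without storing assignments server-side.
            """
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            return "A" if int(digest, 16) % 2 == 0 else "B"


        def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
            """Return the two-sided p-value for the difference in conversion rates."""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            # Two-sided p-value from the standard normal CDF
            return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))


        # Illustrative counts: 480/10,000 conversions in control vs. 540/10,000 in the variant
        p_value = two_proportion_z_test(480, 10_000, 540, 10_000)
        print(f"p-value: {p_value:.4f}")  # conventionally, below 0.05 is treated as significant

    Hash-based assignment keeps a returning user in the same bucket across sessions, which is why it is often preferred over flipping a coin on every request.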

    Common Metrics

    • Conversion rate
    • Click-through rate
    • Revenue per user
    • Engagement time
    • Retention rate
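    As a rough illustration, several of these metrics can be computed from simple per-variant counts; the class, field names, and figures below are made up for the example.

        from dataclasses import dataclass


        @dataclass
        class VariantStats:
            users: int         # users exposed to the variant
            conversions: int   # users who completed the goal action
            clicks: int        # clicks on the tested element
            impressions: int   # times the tested element was shown
            revenue: float     # revenue attributed to the variant

            @property
            def conversion_rate(self) -> float:
                return self.conversions / self.users

            @property
            def click_through_rate(self) -> float:
                return self.clicks / self.impressions

            @property
            def revenue_per_user(self) -> float:
                return self.revenue / self.users


        control = VariantStats(users=10_000, conversions=480, clicks=1_200, impressions=9_800, revenue=24_000.0)
        variant = VariantStats(users=10_000, conversions=540, clicks=1_350, impressions=9_750, revenue=26_500.0)

        print(f"Conversion rate lift: {variant.conversion_rate - control.conversion_rate:+.2%}")
        print(f"Revenue per user lift: {variant.revenue_per_user - control.revenue_per_user:+.2f}")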

    Best Practices

    Start with a hypothesis: "We believe that [change] will [outcome] because [reason]."

    Calculate sample size: Know how many users you need before you start, based on your baseline rate and the smallest lift you care to detect (see the sketch below).
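    A standard-normal approximation for a two-proportion test is one common way to estimate this; the sketch below assumes that approach, and the function name and inputs are illustrative.

        import math
        from statistics import NormalDist


        def sample_size_per_group(
            baseline_rate: float,    # current conversion rate in the control
            minimum_effect: float,   # smallest absolute lift worth detecting
            alpha: float = 0.05,     # false-positive rate (two-sided)
            power: float = 0.80,     # probability of detecting a true effect
        ) -> int:
            """Approximate users needed per variant for a two-proportion z-test."""
            p1 = baseline_rate
            p2 = baseline_rate + minimum_effect
            z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
            z_beta = NormalDist().inv_cdf(power)
            variance = p1 * (1 - p1) + p2 * (1 - p2)
            return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)


        # Example: detect a 1-point lift over a 5% baseline conversion rate
        print(sample_size_per_group(baseline_rate=0.05, minimum_effect=0.01))  # about 8,155 per group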

    Run to completion: Avoid peeking at interim results and stopping as soon as they look significant; repeated checks inflate the false-positive rate. Decide the sample size and duration up front and stick to them.

    Document learnings: Win or lose, capture what you learned.

    Put A/B Testing into practice

    See how ExperimentBets helps teams bring gamification to their experimentation.