What is Statistical Significance?
A measure of confidence that an observed difference between variants is real and not due to random chance.
Statistical significance is a measure of confidence that an observed difference between experiment variants is real, not random noise. Most A/B tests use a 95% confidence threshold (a significance level of 0.05), which means that if there were truly no difference between variants, the test would declare a false positive at most 5% of the time.
Understanding P-Values
The p-value is the probability of seeing a result at least as extreme as the one observed, assuming there is no real difference between variants. A p-value below 0.05 (5%) is typically considered statistically significant.
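As a concrete illustration, here is a minimal sketch of computing a p-value for a two-variant conversion test with a two-proportion z-test. It assumes statsmodels is available, and the conversion counts and sample sizes are hypothetical:

```python
# Minimal sketch: p-value for a hypothetical A/B test using a
# two-proportion z-test from statsmodels. All numbers are made up.
from statsmodels.stats.proportion import proportions_ztest

conversions = [520, 580]      # conversions in control and variant (hypothetical)
visitors = [10_000, 10_000]   # users exposed to each variant (hypothetical)

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.3f}, p = {p_value:.4f}")

# Below 0.05, the difference would be called statistically significant
# at the conventional 95% confidence level.
print("significant" if p_value < 0.05 else "not significant")
```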
Sample Size Matters
Statistical significance requires an adequate sample size. Tests run with too few users are underpowered: they are likely to miss real effects and produce noisy, unreliable estimates. Calculate the required sample size before launching, based on your baseline conversion rate, the minimum effect you want to detect, the significance level, and the desired statistical power.
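One common way to do this calculation is a power analysis. The sketch below uses statsmodels with hypothetical inputs: a 5% baseline rate, a minimum detectable lift to 6%, the conventional 0.05 significance level, and 80% power:

```python
# Minimal sketch of a pre-launch sample size calculation with statsmodels.
# Baseline rate and minimum detectable effect are hypothetical; adjust
# them to your own experiment.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05   # current conversion rate (assumed)
target = 0.06     # smallest lift worth detecting (assumed)

effect = proportion_effectsize(baseline, target)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,            # 5% false positive rate (95% confidence)
    power=0.8,             # 80% chance of detecting a real effect
    alternative="two-sided",
)
print(f"Need about {n_per_variant:.0f} users per variant")
```

Note how sensitive the answer is to the minimum detectable effect: halving the lift you want to detect roughly quadruples the required sample size.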
Common Mistakes
Peeking: Checking results repeatedly while the test runs and stopping as soon as you see significance. Each extra look is another chance for random noise to cross the threshold, so this inflates the false positive rate well above the nominal 5% (see the simulation after this list).
Multiple comparisons: Testing many metrics in a single experiment increases the chance that at least one shows spurious significance; adjust for this with a correction such as Bonferroni or Holm (a sketch follows the list).
Ignoring effect size: A result can be statistically significant but practically meaningless if the effect is tiny.
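To see why peeking is dangerous, the following simulation (hypothetical parameters, using numpy and statsmodels) runs many A/A experiments, where no real difference exists, and compares a single planned check against checking after every batch of users:

```python
# Simulation sketch: both variants share the same true 5% conversion rate,
# so every "significant" result is a false positive. Parameters are made up.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
n_experiments, n_checks, batch = 1000, 20, 500
false_pos_final, false_pos_peek = 0, 0

for _ in range(n_experiments):
    # Simulate conversions; no real difference between a and b.
    a = rng.random(n_checks * batch) < 0.05
    b = rng.random(n_checks * batch) < 0.05

    # Peeking: test after every batch, stop at the first p < 0.05.
    for k in range(1, n_checks + 1):
        n = k * batch
        _, p = proportions_ztest([a[:n].sum(), b[:n].sum()], [n, n])
        if p < 0.05:
            false_pos_peek += 1
            break

    # Disciplined approach: one test at the planned end of the experiment.
    _, p = proportions_ztest([a.sum(), b.sum()], [len(a), len(b)])
    if p < 0.05:
        false_pos_final += 1

print(f"Single final check: {false_pos_final / n_experiments:.1%} false positives")
print(f"Peeking every batch: {false_pos_peek / n_experiments:.1%} false positives")
```

With repeated looks, the false positive rate climbs far above the nominal 5%. If you genuinely need to monitor results while a test runs, use a sequential testing method designed for repeated looks rather than a fixed-horizon test.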
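For multiple comparisons, one standard remedy is to adjust the p-values before declaring any metric a winner. A minimal sketch with made-up p-values, using the Holm correction from statsmodels:

```python
# Minimal sketch: correcting for multiple comparisons across several
# metrics in one experiment. The metric names and p-values are hypothetical.
from statsmodels.stats.multitest import multipletests

metrics = ["conversion", "revenue per user", "bounce rate", "time on page"]
p_values = [0.04, 0.30, 0.01, 0.20]  # hypothetical raw p-values

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for name, p_raw, p_adj, sig in zip(metrics, p_values, p_adjusted, reject):
    print(f"{name}: raw p={p_raw:.2f}, adjusted p={p_adj:.2f}, significant={sig}")
```

In this example, conversion looks significant on its raw p-value (0.04) but no longer survives the correction, which is exactly the spurious result the adjustment guards against.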
Practical Significance vs Statistical Significance
Statistical significance means the observed difference is unlikely to be random noise. Practical significance means the difference matters for your business. A 0.1% improvement in conversion rate might be statistically significant in a large test, yet not worth the cost of implementing.
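One way to operationalize this distinction: estimate the lift with a confidence interval and compare it against a business-defined minimum worthwhile effect. A minimal sketch with hypothetical numbers, using a normal approximation for the interval:

```python
# Sketch: separate the two questions. Is the lift real (statistical
# significance), and is it big enough to act on (practical significance)?
# All counts and the worthwhile-lift threshold below are hypothetical.
import math

control_conv, control_n = 50_000, 1_000_000
variant_conv, variant_n = 51_000, 1_000_000
min_worthwhile_lift = 0.005  # e.g. 0.5 percentage points to justify rollout

p1, p2 = control_conv / control_n, variant_conv / variant_n
diff = p2 - p1
se = math.sqrt(p1 * (1 - p1) / control_n + p2 * (1 - p2) / variant_n)
low, high = diff - 1.96 * se, diff + 1.96 * se  # 95% confidence interval

print(f"Lift: {diff:.4f} (95% CI {low:.4f} to {high:.4f})")
print(f"Statistically significant: {low > 0 or high < 0}")
print(f"Practically significant: {abs(diff) >= min_worthwhile_lift}")
```

With a million users per variant, this example is statistically significant (the interval excludes zero) but falls short of the 0.5 percentage point threshold, so the honest conclusion is "real but not worth shipping."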