Common A/B testing mistakes
Mistakes that waste time and lead to bad decisions.
Ending tests too early
Early data is noisy.
A “20% lift” on day two can become “2%” by week two.
Fix: Wait for statistical confidence.
As a minimum baseline, run the test for 1–2 weeks and aim for at least 100–200 visitors per group.
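One way to make "wait for confidence" concrete is a two-proportion z-test on the conversion counts. A minimal sketch in Python using only the standard library; the visitor and conversion numbers are hypothetical:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# A day-two "lift" on small numbers is usually noise:
z, p = two_proportion_z_test(10, 100, 12, 100)
print(p > 0.05)  # not significant: keep the test running
```

With only 100 visitors per group, even a 20% relative lift produces a large p-value, which is exactly why stopping early is risky.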
Testing too many things at once
If you change the headline, images, pricing, and layout at once, you can’t learn which change caused the result.
Fix: Isolate one meaningful variable per experiment.
If you want to test an entirely different page, use a page split test.
Testing trivial changes
Button colours, font size tweaks, and icon swaps rarely produce a measurable effect.
Fix: Test pricing, offer structure, messaging, and page layout.
Ignoring revenue per visitor
Conversion rate alone can mislead you.
Lower prices often raise conversion rate (CVR) while lowering overall revenue.
Fix: Use revenue per visitor as the primary metric for price tests.
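The price-test pitfall is easy to see in numbers. A minimal sketch with hypothetical visitor, conversion, and price figures:

```python
def revenue_per_visitor(visitors, conversions, price):
    """RPV = conversion rate x average order value."""
    cvr = conversions / visitors
    return cvr * price

# Hypothetical price test: the cheaper variant converts better...
rpv_low  = revenue_per_visitor(1000, 60, 29)   # 6.0% CVR at $29
rpv_high = revenue_per_visitor(1000, 40, 59)   # 4.0% CVR at $59

# ...but the $29 variant earns less per visitor despite the higher CVR
print(rpv_low < rpv_high)
```

Judged on CVR alone the low-price variant "wins", while on revenue per visitor it clearly loses.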
Not running full weekly cycles
User behaviour changes across the week.
Fix: Run tests in full weekly cycles: 7, 14, or 21 days.
Implementing losers because of sunk cost
A new page can lose, even if it took days to build.
Fix: Trust the data.
A losing variant still yields a valuable lesson.