CRO Best Practices
What to Test First on Your Shopify Store
Stop guessing. Start with the tests that actually move revenue.
Most merchants install a testing tool and immediately want to test button colours or font sizes. That's the lowest-leverage place to start.
The biggest conversion gains come from testing the things that directly affect how much money a visitor is worth to your business. In order of impact, here's where to start.
Pricing
Pricing is the single highest-leverage variable in your store. A small price increase — even $2–5 on a product — can significantly increase revenue per visitor without meaningfully affecting conversion rate. And if it does affect conversion, the test will tell you.
Most brands set their prices once based on gut feeling and never revisit them. That's leaving money on the table.
First test to run: Take your best-selling product and test a 10–15% price increase against the current price. Measure revenue per visitor, not just conversion rate. A slight drop in conversion with a meaningful price increase often nets more total revenue.
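To see why, run the numbers (illustrative figures): a $40 product converting at 3% earns $1.20 per visitor. At $46, even if conversion slips to 2.8%, it earns $46 × 0.028 ≈ $1.29 per visitor — roughly 7% more revenue from the same traffic.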
Offer and messaging
After pricing, the way you present your offer is the next biggest lever. This includes:
Your headline and value proposition on the product page
How you frame discounts and promotions (e.g. "Save 20%" vs "Get $10 off" vs "Buy 2, Get 1 Free")
The urgency and scarcity messaging you use
Your product description — benefit-led vs feature-led
First test to run: Rewrite your best-selling product's main headline to lead with a customer benefit instead of a product feature. Test it against the original.
Social proof placement
Where and how you display reviews, ratings, testimonials, and trust badges affects how quickly a visitor builds confidence. Moving a review section higher on the page, or adding a star rating near the Add to Cart button, can have a measurable impact.
First test to run: If your reviews are buried below the fold, test a content variation that adds a summary star rating or a short testimonial near the top of the product page.
Page layout and structure
This is where page split tests come in. If you've built a new landing page or redesigned your product page template, test it against the original before rolling it out to all traffic.
First test to run: If you have a custom landing page for a paid traffic campaign, run a page split test comparing it against your standard product page. See which one actually converts better for that traffic source.
What NOT to prioritise
Button colours, font sizes, icon swaps, and minor layout tweaks. These changes rarely produce statistically significant results and consume testing time that could be spent on higher-impact variables.
Test the things that affect what the visitor buys, how much they pay, and whether they trust you enough to complete checkout. Start there.
How to Build a Testing Roadmap
A structured approach to experimentation — not random guessing.
Running A/B tests without a plan is like throwing darts blindfolded. You might hit something eventually, but it's not repeatable and it's not scalable.
A testing roadmap gives you structure. It tells you what to test, in what order, and why.
Step 1: Audit your funnel
Before you test anything, understand where you're losing visitors. Look at your Shopify analytics or Google Analytics and map out the conversion funnel:
How many visitors land on your product pages?
What percentage add to cart?
What percentage start checkout?
What percentage complete the purchase?
The stage with the biggest drop-off is where your first tests should focus.
If 60% of visitors leave the product page without adding to cart, your product page is the problem — not your checkout flow. If add-to-cart rate is strong but checkout completion is low, the issue is downstream (shipping costs, payment friction, trust at checkout).
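To make the audit concrete, you can turn raw stage counts into drop-off percentages with a few lines of code. A minimal sketch — the stage names and counts below are placeholders for your own analytics numbers:

```ts
// Funnel stages with visitor counts pulled from your analytics
// (the numbers here are placeholders — substitute your own).
const funnel: [stage: string, visitors: number][] = [
  ["Product page views", 10_000],
  ["Added to cart", 1_800],
  ["Started checkout", 1_100],
  ["Completed purchase", 700],
];

// Print the continuation and drop-off rate between consecutive stages.
for (let i = 1; i < funnel.length; i++) {
  const [prevStage, prev] = funnel[i - 1];
  const [stage, curr] = funnel[i];
  const rate = (curr / prev) * 100;
  console.log(
    `${prevStage} → ${stage}: ${rate.toFixed(1)}% continue, ` +
      `${(100 - rate).toFixed(1)}% drop off`
  );
}
// The step with the largest drop-off is where your first tests belong.
```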
Step 2: List your hypotheses
For each problem area, write down specific hypotheses about what might improve it. A good hypothesis follows this format:
"If we [change X], then [metric Y] will improve, because [reason Z]."
Examples:
"If we increase the price by $5 and add a Compare-At strikethrough, then revenue per visitor will increase, because visitors will perceive the product as premium."
"If we rewrite the headline to focus on the outcome rather than the product features, then add-to-cart rate will improve, because visitors care about results, not specifications."
Step 3: Prioritise by impact and effort
Not all tests are equal. Rank your hypotheses by:
Expected impact: How much could this move the needle? Pricing and offer tests typically have high impact. Minor copy tweaks have low impact.
Effort to implement: A price test takes 5 minutes to set up. A full page redesign for a split test takes days.
Traffic requirements: High-traffic pages give you results faster. Testing on a page with 50 visitors per week will take months to reach significance.
Start with high-impact, low-effort tests on high-traffic pages. Save the complex experiments for later.
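One way to make the ranking systematic is a simple score: impact and traffic in the numerator, effort as a penalty. The 1–5 scales and the scoring formula below are one illustrative convention, not an AB Genius feature:

```ts
interface Hypothesis {
  name: string;
  impact: number;  // expected revenue impact, 1 (low) to 5 (high)
  effort: number;  // implementation effort, 1 (trivial) to 5 (days of work)
  traffic: number; // traffic on the target page, 1 (sparse) to 5 (heavy)
}

// Higher impact and traffic raise the score; higher effort lowers it.
const score = (h: Hypothesis): number => (h.impact * h.traffic) / h.effort;

const backlog: Hypothesis[] = [
  { name: "10–15% price increase on best seller", impact: 5, effort: 1, traffic: 5 },
  { name: "Benefit-led headline rewrite", impact: 4, effort: 2, traffic: 5 },
  { name: "Full landing page redesign", impact: 4, effort: 5, traffic: 3 },
  { name: "Button colour swap", impact: 1, effort: 1, traffic: 5 },
];

// Sort descending: the high-impact, low-effort price test tops the list,
// while trivial tweaks score low even on heavy-traffic pages.
backlog
  .sort((a, b) => score(b) - score(a))
  .forEach((h) => console.log(`${score(h).toFixed(1)}  ${h.name}`));
```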
Step 4: Run tests sequentially on the same page
Don't run multiple tests on the same element at the same time. Instead, run your highest-priority test first, collect the result, implement the winner, then run the next test.
Over time, this compounds. Each winning variation becomes the new baseline for the next test. A 5% lift followed by a 3% lift followed by a 7% lift compounds into meaningful revenue improvement.
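The maths: lifts multiply rather than add, so 1.05 × 1.03 × 1.07 ≈ 1.157 — roughly 15.7% above the original baseline after three modest wins.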
Step 5: Document everything
For every test you run, record:
The hypothesis
What was changed
The result (winner, loser, or inconclusive)
The measured lift (or lack thereof)
What you learned
This creates an institutional knowledge base. Over time, you'll develop a clear picture of what works for your specific audience — not generic best practices from blog posts.
Common A/B Testing Mistakes
The mistakes that waste your testing time and produce bad decisions.
Ending tests too early
This is the number one mistake. You see an early lead after two days and declare a winner. But early data is noisy. What looks like a 20% lift on day two often evens out to a 2% difference by day fourteen.
Fix: Wait for statistical significance. At minimum, run for 1–2 weeks with 100–200 visitors per group. Let the data mature.
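If you want to sanity-check a result yourself, the standard tool is a two-proportion z-test. Here's a minimal sketch — a generic statistical check, not AB Genius's internal significance calculation:

```ts
// Two-proportion z-test: is the difference in conversion rates
// between control and variant bigger than chance would explain?
function zTest(
  convA: number, visitorsA: number, // control conversions / visitors
  convB: number, visitorsB: number  // variant conversions / visitors
): { z: number; significant: boolean } {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  // |z| > 1.96 corresponds to p < 0.05 (95% confidence, two-tailed).
  return { z, significant: Math.abs(z) > 1.96 };
}

// Day two: a 20% relative lift on thin data...
console.log(zTest(10, 400, 12, 400));        // z ≈ 0.4 — not significant, keep running
// ...versus the same relative lift with far more visitors per group:
console.log(zTest(250, 10_000, 300, 10_000)); // z ≈ 2.2 — now the difference is real
```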
Testing too many things at once
If your variant has a different headline, different images, different pricing, and a different layout — and it wins — you have no idea which change drove the result. You can't replicate the learning.
Fix: Isolate your variables. Test one meaningful change per experiment. If you want to test a completely different page, that's what page split tests are for — but accept that the result tells you which page wins, not which specific element made the difference.
Testing trivial changes
Button colour tests, font size changes, and minor icon swaps almost never produce statistically significant results. They consume weeks of testing time and deliver nothing actionable.
Fix: Test the things that matter — pricing, offer structure, messaging, page layout. These are the variables that actually move revenue.
Ignoring revenue per visitor
Conversion rate is the most common metric people optimise for, but it can be misleading. A lower price almost always increases conversion rate — but it might decrease your total revenue.
Fix: Use revenue per visitor as your primary metric for price tests. RPV captures both conversion rate and order value in a single number. A test that slightly lowers conversion rate but significantly increases AOV often wins on RPV.
Not running tests for full weekly cycles
Traffic patterns vary by day of the week. If you start a test on Monday and end it on Thursday, you've missed the weekend — which might behave completely differently.
Fix: Always run tests in full weekly cycles (7, 14, 21 days). This ensures you capture the full range of your traffic patterns.
Implementing losers because of sunk cost
You spent a week building a new landing page. The test shows it loses. It's tempting to implement it anyway because of the effort invested.
Fix: Trust the data. The test exists to tell you the truth. If the new page loses, the old page stays. The time spent building the loser isn't wasted — you learned what doesn't work.
How Much Traffic Do I Need to Run a Test?
Realistic expectations based on your store's traffic volume.
A/B testing requires enough visitors to produce statistically reliable results. Here's how to think about it.
The minimum bar
As a general guideline, aim for at least 100–200 unique visitors per test group and 20+ conversions per group before making a decision. This isn't a hard rule — it's a practical minimum for results you can trust.
For a simple A/B test (two groups, 50/50 split), that means you need roughly 200–400 total visitors to reach the minimum threshold.
How long will that take?
It depends on your traffic and your traffic split.
If your product page gets 100 visitors per day and you're running a 50/50 split, each group gets ~50 visitors per day. You'd reach 200 per group in about 4 days. But you'd still want to run for at least 1–2 full weeks to account for daily variation.
If your product page gets 20 visitors per day, the same test takes 20 days just to reach the minimum visitor count — and you'd likely need even longer to accumulate enough conversions for significance.
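As a rough planning tool, that arithmetic fits in a few lines. One assumption baked in: the raw day count is rounded up to full weekly cycles, for the reasons covered under Common A/B Testing Mistakes:

```ts
// Estimate how long a test needs to run, given daily page traffic.
function estimateTestDays(
  dailyVisitors: number,
  groups = 2,        // a plain A/B test splits traffic two ways
  minPerGroup = 200  // the practical minimum discussed above
): number {
  const perGroupPerDay = dailyVisitors / groups;
  const rawDays = Math.ceil(minPerGroup / perGroupPerDay);
  // Round up to full weekly cycles so weekend traffic is captured.
  return Math.ceil(rawDays / 7) * 7;
}

console.log(estimateTestDays(100)); // 7  — four days of raw data, run a full week
console.log(estimateTestDays(20));  // 21 — twenty days, rounded to three weeks
```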
Low-traffic strategies
If your store gets modest traffic, you can still run effective tests. You just need to be strategic:
Focus on high-traffic pages. Test on your homepage or your top-selling product page, not a niche product with 5 visitors per week.
Test big changes. Small changes (a word swap, a colour tweak) produce effects too small to detect without huge sample sizes. Big changes (a significant price increase, a completely different headline) produce larger effects that are detectable with fewer visitors.
Use fewer groups. A 50/50 split gives each group maximum traffic. Adding a third or fourth variant spreads your traffic thinner and extends the time to significance.
Be patient. If your traffic is low, tests will take weeks, not days. That's fine. A reliable result after three weeks is more valuable than a premature decision after three days.
When testing might not be practical
If your store gets fewer than 500 visitors per month total, traditional A/B testing will be very slow. At that traffic level, focus on qualitative research (customer surveys, session recordings, competitor analysis) to inform changes, and use A/B testing selectively for your biggest decisions — like a major price change.
Price Testing Strategy Guide
How to think about pricing experiments — not just how to set them up.
Price testing is the most valuable experiment type in AB Genius. But running a good price test requires more than just picking two numbers and seeing what happens.
Understanding what you're actually testing
When you run a price test, you're not testing "which price is better." You're testing the trade-off between conversion rate and order value.
A lower price will almost always convert at a higher rate. A higher price will almost always convert at a lower rate. The question is: at what price point does revenue per visitor peak?
Revenue per visitor = conversion rate × average order value.
This is the metric that matters. A 10% price increase that causes a 3% drop in conversion rate is a net win for your business.
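Here is that formula in code, with illustrative numbers for the 10%-increase scenario above:

```ts
// Revenue per visitor = conversion rate × average order value.
const rpv = (conversionRate: number, avgOrderValue: number): number =>
  conversionRate * avgOrderValue;

// Illustrative: a 10% price increase that costs 3% of conversions.
const control = rpv(0.030, 40.0);        // $1.200 per visitor
const variant = rpv(0.030 * 0.97, 44.0); // $1.280 per visitor

console.log(`Control: $${control.toFixed(3)}, Variant: $${variant.toFixed(3)}`);
console.log(`Lift: ${(((variant / control) - 1) * 100).toFixed(1)}%`); // ≈ 6.7%
```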
Three types of price tests to run
Price increase test
Test your current price against a moderately higher price (10–20% increase). This is the simplest and often the most profitable test. Many brands are underpriced and don't know it.
Price anchoring test
Test adding or changing the Compare-At (strikethrough) price. For example, your current price is $39. Test showing "$59" as the Compare-At price with "$39" as the current price vs no Compare-At price at all. Anchoring can increase perceived value without changing the actual price.
Price reduction test
If you suspect your prices are too high and hurting conversion, test a lower price point. But measure revenue per visitor, not just conversion rate. If the lower price converts 20% better but the price is 30% lower, you're making less money per visitor.
Tips for effective price tests
Test meaningful differences. A $1 price difference on a $40 product is unlikely to produce a detectable effect. Test changes of at least 10–15%.
Include Compare-At pricing. If you're raising prices, pair the increase with a Compare-At value to maintain perceived value. This is often the real winner — the higher price with anchoring outperforms the lower price without it.
Run on your best sellers. High-traffic products give you faster results. And a price optimisation on your best seller has the highest revenue impact.
Measure revenue per visitor, not conversion rate. This is worth repeating. Conversion rate alone will almost always favour the lower price. Revenue per visitor tells you which price actually makes you more money.
Don't forget margins. AB Genius tracks revenue, but profit is what matters. A higher price with the same COGS means better margins per order. Factor that in when implementing your winner.
Want hands-on CRO guidance? Contact us at info@abgenius.io
FAQ / General
Does AB Genius Work With My Theme?
AB Genius works with any Shopify Online Store 2.0 theme. This includes all modern Shopify themes — Dawn, Sense, Craft, Ride, and all third-party themes built on the OS 2.0 framework.
The app is delivered through Shopify's Theme App Extension system, which is compatible with any theme that supports app blocks and app embeds.
If you're using a legacy (non-OS 2.0) theme, AB Genius may not function correctly. You'd need to upgrade your theme to a 2.0-compatible version. Most modern themes and all themes released after 2021 support OS 2.0.
For headless or custom Shopify storefronts (using the Storefront API or Hydrogen), the current implementation relies on Theme App Extensions and would require custom integration. Contact us at info@abgenius.io if you're running a headless setup.
Does AB Genius Affect SEO?
No. AB Genius experiments are invisible to search engines.
Experiments are delivered client-side via JavaScript after the page loads. Search engine crawlers (Googlebot, Bingbot, etc.) receive the original, unmodified page HTML — the same content they've always seen.
This means:
No duplicate content issues.
No changes to your indexed pages.
No impact on your search rankings.
For page split tests, the redirect happens via client-side JavaScript, which search crawlers typically do not follow. The control URL remains the indexed page.
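If you're curious what that pattern generally looks like, here's a simplified sketch of a cookie-pinned, client-side split redirect. This is a generic illustration (the cookie name and variant URL are hypothetical), not AB Genius's actual code:

```ts
// Simplified sketch of a client-side page split test.
// Crawlers that don't execute JavaScript never see the redirect,
// so the control URL stays the indexed page.
const VARIANT_URL = "/pages/landing-v2"; // hypothetical variant page

function assignSplitGroup(): "control" | "variant" {
  const existing = document.cookie.match(/split_group=(control|variant)/);
  if (existing) return existing[1] as "control" | "variant"; // sticky assignment
  const group = Math.random() < 0.5 ? "control" : "variant";
  document.cookie = `split_group=${group}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return group;
}

if (assignSplitGroup() === "variant" && location.pathname !== VARIANT_URL) {
  location.replace(VARIANT_URL); // replace() keeps the redirect out of history
}
```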
If you have specific SEO concerns about your testing setup, contact us and we'll help you verify that your experiments are SEO-safe.
Can Visitors Tell They're in a Test?
No. Experiments are completely invisible to visitors.
Visitors are assigned to groups anonymously via a browser cookie. No consent banner, notification, or visual indicator is shown. The visitor simply sees one version of your storefront — they have no way to know that other visitors might be seeing a different version.
For price tests, the anti-flicker technology ensures the original price is never visible — the test price is applied before the page content becomes visible to the visitor. There is no "flash" of the original price.
The visitor's experience feels completely natural and consistent. If they return to your store, they'll see the same variation they were originally assigned to.
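If you're curious how an anti-flicker pattern works in general, here's a simplified sketch. The ".price" selector and the fetchTestPrice helper are hypothetical placeholders, not the app's actual implementation:

```ts
// Hypothetical helper: looks up the price for this visitor's test group.
declare function fetchTestPrice(): Promise<string | null>;

// Generic anti-flicker pattern: hide the price before first paint,
// swap in the test price, then reveal it — so the original price
// never flashes on screen.
const style = document.createElement("style");
style.textContent = ".price { visibility: hidden; }"; // ".price" is a placeholder selector
document.head.appendChild(style);

fetchTestPrice().then((testPrice) => {
  if (testPrice !== null) {
    document.querySelectorAll<HTMLElement>(".price").forEach((el) => {
      el.textContent = testPrice;
    });
  }
  style.remove(); // reveal the (possibly updated) price
});
```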
Does Ending a Test Apply the Winner Automatically?
No. Ending an experiment — or declaring a winner — does not change anything on your Shopify store.
When you end a test or declare a winner:
The experiment stops running.
No more visitors are assigned to test groups.
All visitors return to seeing the original store experience.
Results are preserved for your review.
To implement the winning variation permanently, you need to make the change yourself in your Shopify admin:
Price test winner: Update the product price in Shopify to the winning price.
Content test winner: Update the text, copy, or layout in your theme.
Page split test winner: Redirect traffic to the winning page or replace the original page.
This is intentional. AB Genius gives you data to make informed decisions. The permanent changes to your store are yours to implement when you're ready.
Can I Run AB Genius on Multiple Stores?
AB Genius is installed per Shopify store. Each store has its own AB Genius installation, its own experiments, and its own data.
If you manage multiple Shopify stores, you'll need to install AB Genius on each store separately. Each installation operates independently — experiments on one store do not affect another.
Each store requires its own subscription (Free or Pro). Plans are not shared across stores.
What Shopify Plans Does AB Genius Support?
AB Genius works with all Shopify plans: Basic, Shopify, Advanced, and Shopify Plus.
There are no Shopify plan restrictions. The app's features are determined by your AB Genius plan (Free or Pro), not your Shopify plan.
Can I Use AB Genius With a Custom Domain?
Yes. AB Genius works with your Shopify store's primary domain, whether that's your default .myshopify.com domain or a custom domain. No additional configuration is needed.
How Do I Contact Support?
You can reach our support team at:
info@abgenius.io
We help with experiment setup, troubleshooting, platform guidance, and CRO strategy questions. If you need deeper CRO support beyond the tool itself, let us know — we can connect you with CRO specialists.
More questions? Reach out at info@abgenius.io
