Results Analytics Docs

Reading Your Results Dashboard

Where to find your experiment data and how to interpret it.

Every running, paused, and ended experiment has a Results tab. This is where you see how each test group is performing — and ultimately, where you make the decision on what to implement.

Accessing results

  1. Open AB Genius in your Shopify admin.

  2. Go to the Tests page.

  3. Click the chart icon next to any experiment, or open the experiment and navigate to the Results tab.

What you'll see

The Results dashboard is split into several sections:

  • Key Metrics Panel — A side-by-side comparison of your test groups across the core performance metrics. You can select any two groups to compare directly.

  • Time Series Chart — A daily breakdown of visitors and conversions per group over the experiment's lifetime. Use the date range picker to zoom into specific periods.

  • Audience Breakdown — Tabs that split your results by different dimensions (more on this below).

  • Win Banner — When one group is statistically ahead, a banner appears showing which group is winning and by how much.

The metrics you're comparing

For each pair of groups, the dashboard shows:

  • Conversion Rate — purchases divided by product views, shown as a percentage.

  • Revenue per Visitor (RPV) — total revenue divided by assigned visitors, shown as a dollar amount.

  • Average Order Value (AOV) — total revenue divided by number of orders.

  • Add to Cart Rate — add-to-cart events divided by product views.

  • Abandoned Cart Rate — the percentage of checkout starts that didn't result in a purchase.

Each metric also shows the percentage difference between the two selected groups and a monthly impact projection.
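To make the definitions concrete, here's a minimal sketch of how these metrics and the percentage difference can be computed from raw counts. The field and function names are illustrative, not AB Genius's actual data model.

```typescript
// Minimal sketch of the metric definitions above.
// Field names are illustrative, not AB Genius's internal schema.
interface GroupStats {
  visitors: number;       // visitors assigned to the group
  productViews: number;
  addToCarts: number;
  checkoutStarts: number;
  orders: number;         // completed purchases
  revenue: number;        // total revenue, in store currency
}

function coreMetrics(g: GroupStats) {
  return {
    conversionRate: g.orders / g.productViews,          // purchases / product views
    revenuePerVisitor: g.revenue / g.visitors,          // revenue / assigned visitors
    averageOrderValue: g.revenue / g.orders,            // revenue / orders
    addToCartRate: g.addToCarts / g.productViews,       // add-to-carts / product views
    abandonedCartRate: 1 - g.orders / g.checkoutStarts, // checkout starts not purchased
  };
}

// The percentage difference shown between two selected groups:
const pctDiff = (variant: number, control: number) =>
  ((variant - control) / control) * 100;
```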

How to read the comparison

The dashboard always compares two groups at a time. If your experiment has more than two groups, use the group selector to switch between comparisons.

Green numbers indicate the selected variant is outperforming the control. Red numbers indicate underperformance. The percentage difference tells you by how much.

Focus on your primary metric (the one you selected in the Metrics tab during setup). This is the metric AB Genius uses to suggest a winner, and it should align with the business question you're trying to answer.

Understanding Statistical Significance

When your data is reliable enough to act on.

Statistical significance is the confidence level that the difference between your test groups is real — not just random noise from day-to-day traffic variation.

Why it matters

If you run a price test for two days and Variant A has a 3.2% conversion rate vs the Control's 2.8%, that might look like a win. But with only 50 visitors per group, that difference could easily be random chance. Run it for two more weeks and the numbers might flip.

Statistical significance tells you: "Given the amount of data we have, how confident can we be that this difference is real?"

How AB Genius calculates it

AB Genius runs significance calculations automatically in the background. You don't need to do any manual statistics. The app compares conversion rates and revenue metrics between groups and calculates whether the observed difference is likely to hold over time.
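This doc doesn't specify the exact statistical method AB Genius uses, so the sketch below shows one common approach, a two-proportion z-test, applied to the price-test example above. It's illustrative only, not the app's implementation.

```typescript
// Illustrative two-proportion z-test; not AB Genius's actual implementation.
function zScore(convA: number, visA: number, convB: number, visB: number): number {
  const pA = convA / visA;
  const pB = convB / visB;
  // Pooled rate under the assumption that there is no real difference
  const pooled = (convA + convB) / (visA + visB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visA + 1 / visB));
  return (pA - pB) / se; // |z| >= 1.96 corresponds to ~95% confidence
}

// The low-traffic scenario above: roughly 2 vs 1 conversions from 50 visitors each
console.log(zScore(2, 50, 1, 50).toFixed(2));       // ≈ 0.59, nowhere near 1.96

// Even scaled up, 3.2% vs 2.8% takes a lot of data to confirm:
console.log(zScore(160, 5000, 140, 5000).toFixed(2)); // ≈ 1.17, still not significant
```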

When significance is reached, the Win Banner appears in the Results tab, showing which group is ahead and on which metric.

How to use it

Don't act before significance is reached. If the Win Banner hasn't appeared, the data isn't conclusive yet. Making decisions on inconclusive data is among the most common A/B testing mistakes.

Don't chase early leads. It's normal for one group to appear ahead in the first few days, only to even out over time. Early data is the least reliable data.

Minimum thresholds to aim for:

  • At least 100–200 unique visitors per group

  • At least 1–2 weeks of runtime

  • Enough conversions per group to form a pattern (ideally 20+ per group)

If your store gets low traffic, experiments will take longer to reach significance. That's normal — it doesn't mean the test isn't working. It means you need more data.
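As a rough sketch, the thresholds above can be expressed as a simple checklist. The cutoffs mirror the guidance in this doc; they're rules of thumb, not the app's internal significance logic.

```typescript
// Rule-of-thumb readiness check based on the thresholds above;
// not the app's internal significance calculation.
function enoughData(
  visitorsPerGroup: number,
  conversionsPerGroup: number,
  daysRunning: number
): boolean {
  return (
    visitorsPerGroup >= 100 &&    // at least 100–200 unique visitors per group
    conversionsPerGroup >= 20 &&  // ideally 20+ conversions per group
    daysRunning >= 7              // at least 1–2 weeks of runtime
  );
}
```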

What if significance is never reached?

If you've run the experiment for several weeks with meaningful traffic and significance still hasn't been reached, that's actually useful information. It likely means the difference between your variations is small — possibly too small to matter.

In this case, you can either end the test and implement whichever version you prefer operationally, or redesign the experiment with a bigger variation (a more meaningful price change, a more dramatically different headline, etc.).

Declaring a Winner

How to end an experiment and lock in the result.

When one test group is consistently outperforming the others and statistical significance has been reached, it's time to declare a winner.

How to declare a winner

  1. Go to the Results tab of your experiment.

  2. When the Win Banner appears, it will show which group is leading and by what percentage.

  3. Click Declare Winner.

  4. Select the winning group.

  5. Confirm.

This action:

  • Locks in the winner in the database permanently.

  • Changes the experiment status to Ended.

  • Stops all new visitor assignments.

  • Records the winning group and the metrics at the time of declaration.

This action cannot be undone. Once a winner is declared, the experiment is permanently closed.

What happens after declaring a winner

AB Genius does not automatically apply the winning variation to your store. This is by design.

  • If the winning variation was a price change, you need to go into your Shopify admin and manually update the product price to the winning price.

  • If the winning variation was a content change, you need to update your theme or page content to reflect the winning copy.

  • If the winning variation was a different landing page (page split test), you need to redirect your traffic to the winning page or replace the original page.

Why doesn't AB Genius apply the winner automatically?

Because permanent store changes should be deliberate. An experiment tells you what works better — but the decision to implement it permanently is yours. You might want to discuss it with your team, apply it as part of a broader update, or run a follow-up test before committing.

Can I end an experiment without declaring a winner?

Yes. Click the three-dot menu on the experiment and select End. This stops the experiment without recording a winning group. Your results data is preserved — you can still review the analytics. You just won't have a formal winner on record.

Audience Breakdown

See how your experiment performs across different visitor segments.

The Audience Breakdown section in the Results tab lets you slice your results by multiple dimensions. This helps you understand whether a variation works universally or only for specific segments.

Available breakdown tabs

  • All Visitors — The overall performance across all assigned visitors. This is your headline result.

  • Desktop / Mobile — Results split by device type. A variation might win on desktop but lose on mobile (or vice versa). This is one of the most common and important breakdowns to check.

  • New / Returning Visitors — Results split by visitor type. New visitors and returning visitors often respond very differently to pricing and messaging changes.

  • Source Channels — Results split by traffic source: Direct, Organic Search, Paid Search, Organic Social, Paid Social, Email, Referral. If a variation performs well on paid traffic but poorly on organic, that's critical information for your ad spend decisions.

  • Source Sites — Results split by specific referrer domain. Useful for understanding which referring sites are sending the most engaged traffic.

  • Top 10 Countries — Geographic performance breakdown. If you sell internationally, price sensitivity and messaging effectiveness vary significantly by market.

  • Browsers — Results split by browser (Chrome, Safari, Firefox, etc.). Occasionally useful for diagnosing technical issues — if a variation underperforms on a specific browser, it might indicate a rendering problem rather than a messaging problem.

How to use audience breakdowns

Don't just look at the "All Visitors" tab. The aggregate result can mask important segment-level differences.

For example: your headline test might show a +5% lift overall. But the breakdown reveals a +15% lift on mobile and a 3% loss on desktop. The overall average hides the fact that you should implement the change on mobile only.

Similarly, a price test might show a neutral result overall, but the country breakdown reveals it's a strong winner in the US and a strong loser in the UK — suggesting you should price differently by market.

Important caveat

The more you segment your results, the smaller your sample size per segment becomes. A result that looks significant at the "All Visitors" level might not be significant when you slice it by country or device. Be cautious about making decisions on small segment-level samples.
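A quick back-of-the-envelope check (with illustrative numbers) shows how fast the samples shrink:

```typescript
// Illustrative only: segmenting divides your sample.
const visitorsPerGroup = 2000;  // healthy at the "All Visitors" level
const countries = 10;           // Top 10 Countries breakdown
console.log(visitorsPerGroup / countries); // 200 visitors per country per group
// At a 2% conversion rate that is only ~4 conversions per segment,
// well below the ~20 per group suggested earlier.
```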

Monthly Impact Projections

What the estimated impact numbers mean — and what they don't.

In the Results tab, each metric comparison includes a monthly impact projection. These numbers estimate how much additional revenue, conversions, or add-to-carts you could expect per month if the winning variation's lift held constant.

How projections are calculated

Each metric uses a formula based on the observed lift and your store's session volume:

  • Conversion Rate projection: The percentage difference multiplied by your sessions, multiplied by a conversion factor of 0.25. This gives an estimated number of extra conversions per month.

  • Revenue per Visitor projection: The percentage difference multiplied by your sessions, multiplied by a revenue factor of 3.85. This gives an estimated extra revenue per month.

  • Add to Cart Rate projection: The percentage difference multiplied by your sessions, multiplied by an engagement factor of 0.4.

  • Abandoned Cart Rate projection: The absolute percentage difference multiplied by your sessions, multiplied by a factor of 0.15.
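In code, these formulas look roughly like this. The factor values come straight from the descriptions above; the input units (for instance, whether the lift is a fraction or a percentage) aren't specified here, so treat them as an assumption of this sketch.

```typescript
// Sketch of the monthly impact projection formulas described above.
// Factors come from this doc; input units are an assumption of the sketch.
function monthlyProjections(lift: number, sessions: number, absLift: number) {
  return {
    extraConversions: lift * sessions * 0.25,     // conversion factor
    extraRevenue: lift * sessions * 3.85,         // revenue factor, in dollars
    extraAddToCarts: lift * sessions * 0.4,       // engagement factor
    cartAbandonImpact: absLift * sessions * 0.15, // abandoned cart factor
  };
}

// e.g. a +5% lift (expressed as 0.05) with 10,000 monthly sessions:
console.log(monthlyProjections(0.05, 10_000, 0.05));
// { extraConversions: 125, extraRevenue: 1925, extraAddToCarts: 200, cartAbandonImpact: 75 }
```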

How to interpret them

These projections are estimates, not guarantees. They assume:

  • Your traffic volume stays consistent.

  • The measured lift holds over time.

  • External factors (seasonality, promotions, market changes) remain constant.

Use them as directional guidance — "this change could be worth roughly $X per month" — rather than precise forecasts. They're most useful for prioritising which winning variations to implement first.

Common questions

Why does the projection say "EST. +$7 MONTHLY" for a 180% lift?

The projection is based on your actual traffic volume during the experiment. If you had very few visitors, even a large percentage lift translates to a small absolute number. As your traffic grows, the projected impact of the same percentage lift grows too.

Can I rely on projections for financial planning?

No. Use them for relative prioritisation — "this test is likely more impactful than that test" — not for budgeting or forecasting. Implement the winner, monitor your actual metrics, and measure the real impact.

Custom Metrics (Pro Plan)

Track the specific KPIs that matter to your business.

The standard metrics in AB Genius — conversion rate, revenue per visitor, AOV, add to cart rate, and abandoned cart rate — cover the fundamentals. But your business might have specific metrics that matter more.

Custom Metrics on the Pro plan let you go beyond the defaults.

Available custom metrics

In the Metrics tab of your experiment, you can select which metrics to track and which one to use as your primary success metric.

Available options include:

  • Visitors, Orders & Revenue — overall traffic and sales performance across all test groups

  • Conversion Rate — percentage of visitors who complete a purchase

  • Revenue per Visitor — average revenue generated per unique visitor, excluding discounts

  • Average Order Value — net revenue divided by total number of orders

  • Add to Cart Rate — percentage of visitors who add a product to their cart

  • Abandoned Cart Rate — percentage of checkout starts that don't convert to purchases

  • Profit per Order — available when cost data is configured

  • Average Units per Order — average number of items per order
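For the two less common metrics in this list, Profit per Order and Average Units per Order, here's a minimal sketch. The field names are hypothetical, and profit only works once your cost data is configured.

```typescript
// Illustrative sketch of Profit per Order and Average Units per Order.
// Field names are hypothetical; profit requires configured cost data.
interface Order {
  netRevenue: number; // revenue after discounts
  cost: number;       // from your configured cost data
  units: number;      // items in the order
}

function proPlanMetrics(orders: Order[]) {
  const n = orders.length;
  const sum = (f: (o: Order) => number) => orders.reduce((s, o) => s + f(o), 0);
  return {
    profitPerOrder: (sum(o => o.netRevenue) - sum(o => o.cost)) / n,
    averageUnitsPerOrder: sum(o => o.units) / n,
  };
}
```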

Choosing your primary metric

Your primary metric is the one AB Genius highlights in the Results tab and uses to generate the Win Banner. Choose the metric that best represents the business question you're answering.

  • For price tests: Revenue per Visitor is usually the best primary metric. It captures both conversion rate and order value in a single number. A higher price might lower conversion rate but increase revenue per visitor — and RPV tells you the net effect (see the worked example after this list).

  • For content tests: Conversion Rate is often the right choice if you're testing messaging designed to drive more purchases. If you're testing upsell messaging or bundle offers, AOV or Revenue per Visitor might be more relevant.

  • For page split tests: Conversion Rate gives you the clearest read on which page design drives more purchases.
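Here's a small worked example of the price-test point above, with illustrative numbers: a higher price can lose some conversions and still win on RPV.

```typescript
// Illustrative numbers only.
const control = { conversionRate: 0.030, price: 40 }; // 3.0% convert at $40
const variant = { conversionRate: 0.026, price: 50 }; // 2.6% convert at $50

const rpv = (g: { conversionRate: number; price: number }) =>
  g.conversionRate * g.price; // expected revenue per visitor

console.log(rpv(control)); // $1.20 per visitor
console.log(rpv(variant)); // $1.30 per visitor: fewer orders, more revenue
```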

Custom metrics on the Free plan

The Free plan includes the standard metric set. Custom metric selection and the ability to designate a custom primary metric are available on Pro only.

Need help interpreting your results? Contact us at info@abgenius.io