Test Management Extended Docs


Mutual Exclusion

Preventing visitors from being in conflicting experiments at the same time.

Mutual exclusion is the concept of ensuring that a single visitor isn't assigned to multiple experiments that could interfere with each other. Understanding how AB Genius handles this is important for maintaining clean test results.

How AB Genius handles multiple experiments

AB Genius assigns visitors to each experiment independently. If you have three experiments running, a single visitor could be assigned to a group in all three.
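Independent per-experiment assignment can be pictured as a deterministic hash that mixes in the experiment ID. This is a hypothetical sketch, not AB Genius's actual algorithm; the experiment names and the two-group split are illustrative assumptions:

```python
import hashlib

def assign_group(visitor_id, experiment_id, groups=("control", "variant")):
    """Deterministically assign a visitor to a group for one experiment.

    Because the hash mixes in the experiment ID, the same visitor can land
    in different groups across different experiments: assignments are
    independent per experiment.
    """
    digest = hashlib.sha256(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    return groups[int(digest, 16) % len(groups)]

# One visitor, three experiments: three independent assignments.
visitor = "visitor-123"
for exp in ("price-test-a", "homepage-content", "landing-split"):
    print(exp, assign_group(visitor, exp))
```

The same visitor ID always hashes to the same group within one experiment, so assignments are stable across visits without any coordination between experiments.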

This is fine when the experiments target different elements — for example, a price test on Product A, a content test on the homepage, and a page split test on a landing page. These experiments don't interact with each other, so there's no conflict.

When mutual exclusion matters

Conflicts arise when two experiments modify the same thing:

  • Two price tests on the same product. Both experiments will try to set a price for the same product. The second assignment may overwrite the first, producing unreliable data for both tests.

  • Two content tests modifying the same element on the same page. If both experiments target the same headline, the second modification may overwrite the first.

  • A page split test and a content/price test on the same page. If a visitor is redirected by the page split test, they leave the original page entirely — so the content or price test on that original page never applies. But if they're in the control group of the split test (staying on the original page), both experiments could apply.

How to manage mutual exclusion

AB Genius does not currently enforce automatic mutual exclusion between experiments. This means it's your responsibility to ensure you don't run conflicting experiments simultaneously.

Practical rules to follow:

  • One price test per product at a time.

  • One content test per element per page at a time.

  • Don't run content or price tests on the control page of an active page split test unless you understand the interaction.

  • If in doubt, run experiments sequentially rather than in parallel.

Using targeting to create manual exclusion

If you need to run two experiments on the same page for different audiences, you can use audience targeting to create non-overlapping segments:

  • Experiment A targets mobile visitors only.

  • Experiment B targets desktop visitors only.

Since the targeting criteria don't overlap, no single visitor can be in both experiments. This is a manual form of mutual exclusion.
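The logic behind this guarantee can be sketched as two targeting predicates that no visitor can satisfy at once. The predicates and the `device` field are illustrative assumptions, not AB Genius internals:

```python
def targets_experiment_a(visitor):
    """Experiment A: mobile visitors only (illustrative criteria)."""
    return visitor["device"] == "mobile"

def targets_experiment_b(visitor):
    """Experiment B: desktop visitors only."""
    return visitor["device"] == "desktop"

# Because the predicates are mutually exclusive, no visitor matches both.
for device in ("mobile", "desktop", "tablet"):
    visitor = {"device": device}
    assert not (targets_experiment_a(visitor) and targets_experiment_b(visitor))
```

Note that a tablet visitor here matches neither experiment; non-overlapping segments do not have to cover all traffic.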

Best practice

Keep a simple record of which experiments are running on which pages and products. Before launching a new experiment, check for overlap. This takes 30 seconds and prevents weeks of corrupted data.
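A spreadsheet is enough for this record, but the overlap check itself is mechanical. A minimal sketch, with hypothetical experiment names and a made-up `page:`/`product:` scope convention:

```python
# Hypothetical running-experiment log: (name, scope), where scope lists the
# pages or products the experiment touches.
running = [
    ("Price test: Widget", {"product:widget"}),
    ("Headline test: Home", {"page:/"}),
]

def conflicts(new_scope, log):
    """Return the names of running experiments whose scope overlaps."""
    return [name for name, scope in log if scope & new_scope]

# A proposed price test on the same product flags the conflict.
print(conflicts({"product:widget"}, running))  # ['Price test: Widget']
```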


QA Checklist

A complete checklist to verify your experiment before going live.

Use this checklist before launching any experiment. It covers all three test types and will catch the most common setup issues.

Universal checks (all test types)

Price test checks

Content test checks

Page split test checks

Post-launch checks (first 24 hours)


Scheduling Tests

Set experiments to start automatically at a specific date and time.

Instead of manually starting an experiment, you can schedule it to launch automatically. This is useful when you want to coordinate test launches with campaign launches, promotions, or team availability.

How to schedule an experiment

1. Create your experiment and complete all setup (groups, prices/content, targeting, metrics).

2. Leave the experiment in Draft status — do not start it manually.

3. Click the three-dot menu on the experiment in the Tests list.

4. Select Schedule.

5. Set the date and time you want the experiment to start. The timezone is based on the setting detected during your onboarding.

6. Confirm.

The experiment status will change to Scheduled. At the specified time, it will automatically switch to Running and begin assigning visitors.
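The Scheduled-to-Running transition amounts to a time comparison performed on AB Genius's side. A hypothetical sketch of that check, using field names (`status`, `starts_at`) that are assumptions for illustration:

```python
from datetime import datetime, timezone

def due_to_start(experiment, now=None):
    """True if a Scheduled experiment's start time has passed."""
    now = now or datetime.now(timezone.utc)
    return experiment["status"] == "Scheduled" and experiment["starts_at"] <= now

exp = {"status": "Scheduled",
       "starts_at": datetime(2024, 6, 1, 0, 0, tzinfo=timezone.utc)}
if due_to_start(exp):
    exp["status"] = "Running"  # begins assigning visitors
```

Note the timezone-aware comparison: a naive and an aware datetime cannot be compared directly in Python, which is one reason the onboarding timezone setting matters.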

Cancelling a scheduled experiment

1. Find the experiment in the Tests list (it will show a Scheduled status badge).

2. Click the three-dot menu.

3. Select the option to cancel the schedule.

The experiment will revert to Draft status. You can then reschedule it, start it manually, or edit it further.

Plan requirements

Scheduling is available on the Pro plan only. Free plan experiments must be started manually.

Tips

Schedule launches for the start of the day. If your store gets most of its traffic during business hours, scheduling an experiment to start at midnight means you capture a full day of data from day one.

Coordinate with ad campaigns. If you're launching a new ad campaign and want to test the landing page, schedule the experiment to start at the same time as the campaign.

Don't forget to complete QA before the scheduled time. Once the experiment auto-starts, it's live. Run through the QA Checklist well before the scheduled launch.


Pausing Tests

Temporarily stop an experiment without losing data.

Sometimes you need to pause an experiment — maybe you're running a flash sale and don't want it to interfere with test results, or you've spotted an issue that needs fixing before the test continues.

How to pause an experiment

1. Go to the Tests page.

2. Find the running experiment.

3. Click the Pause button (the play/pause icon) next to the experiment.

The status changes to Stopped. No new visitors will be assigned to test groups. Existing data is fully preserved.

What happens when an experiment is paused

  • No new visitor assignments. Visitors arriving on the page will see the original store experience — as if no experiment exists.

  • Existing data is preserved. All visitor assignments, events, and results collected before the pause remain intact.

  • Previously assigned visitors are unaffected. If a visitor was assigned to a group before the pause and returns during the pause, their cached assignment (ab_data cookie, 1 hour) may still show the variant. After the cache expires, they'll see the original experience until the experiment resumes.
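The behaviour for returning visitors during a pause can be sketched as a cache lookup. The `ab_data` cookie name and one-hour lifetime come from the text above; the decision logic and field names are a hypothetical illustration:

```python
import time

CACHE_TTL = 3600  # ab_data cookie lifetime: 1 hour

def variant_to_show(cookie, experiment_running):
    """Decide what a returning visitor sees, given the cached assignment."""
    cache_fresh = cookie is not None and time.time() - cookie["set_at"] < CACHE_TTL
    if cache_fresh:
        return cookie["group"]       # cached assignment still honoured
    if experiment_running:
        return "assign_new_group"    # normal running behaviour
    return "original"                # paused + cache expired -> original store

# Visitor assigned 30 minutes ago, experiment now paused:
cookie = {"group": "variant", "set_at": time.time() - 1800}
print(variant_to_show(cookie, experiment_running=False))  # variant
```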

How to resume

1. Go to the Tests page.

2. Find the paused experiment (status: Stopped).

3. Click the Play button.

The experiment returns to Running status. New visitors will be assigned to groups again, and data collection continues from where it left off.

When to pause vs end

Pause when you need a temporary stop. You plan to resume the experiment once the disruption is over (sale ends, issue is fixed, holiday period passes).

End when you're done with the experiment permanently. You've seen enough data, or the experiment is no longer relevant.

The key difference: paused experiments can be resumed. Ended experiments cannot.

Impact on results

Pausing an experiment introduces a gap in your data. The time series chart in the Results tab will show a period with zero visitors during the pause. This doesn't invalidate your results, but be aware that the gap could coincide with different traffic patterns (e.g. if you paused over a weekend).

If possible, avoid pausing experiments mid-week. Pause at the end of a full weekly cycle and resume at the start of the next one to keep your data clean.


Implementing Winning Tests

How to apply your winning variation permanently — and what to watch for.

Declaring a winner in AB Genius is not the end of the process. It's the signal to implement the change on your live store. Here's how to do that for each test type, and what to monitor after implementation.

Implementing a price test winner

1. Go to your Shopify Admin → Products.

2. Find the product(s) included in the experiment.

3. Update the product price to the winning test price.

4. If the winning variant included a Compare-At price, set that in Shopify as well.

5. Save.

The winning price is now your live price for all visitors. No more experiment — this is the permanent change.
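Most merchants will make this change in the Shopify admin UI, but the same update can be made through the Shopify Admin REST API's product variant endpoint (`PUT /admin/api/<version>/variants/<variant_id>.json`). A sketch that only builds the request body; the variant ID and prices are placeholders:

```python
import json

def price_update_payload(variant_id, price, compare_at_price=None):
    """Build the JSON body for a Shopify Admin API variant price update."""
    variant = {"id": variant_id, "price": str(price)}
    if compare_at_price is not None:
        variant["compare_at_price"] = str(compare_at_price)
    return {"variant": variant}

body = json.dumps(price_update_payload(1234567890, "44.00", "59.00"))
# Send as the body of: PUT /admin/api/2024-01/variants/1234567890.json
print(body)
```

Shopify expects prices as strings, and `compare_at_price` is omitted entirely when the winning variant had no Compare-At price.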

Implementing a content test winner

1. Identify exactly which modifications the winning variant included. Go to the experiment's Modifications tab for reference.

2. Open your Shopify theme editor (Online Store → Themes → Customise).

3. Make the changes directly in your theme — update headlines, product descriptions, button text, section ordering, or whatever the winning variant modified.

4. If the winning variant used custom CSS, add that CSS to your theme's stylesheet or a custom CSS section.

5. Save and publish.

Implementing a page split test winner

If the challenger page won:

Option A — Replace the original. Rebuild the original page to match the winning challenger design. This keeps your URL structure clean.

Option B — Redirect. Set up a permanent redirect from the original URL to the winning page URL. This is faster but means maintaining a separate page.

Option C — Swap URLs. If both pages are product pages, you might update the original product page to match the challenger's design and content.

The right approach depends on your store's URL structure and SEO considerations. If the original URL has SEO authority and backlinks, Option A (rebuilding in place) or a 301 redirect is usually best.

What to monitor after implementation

Implementing a winner isn't "set and forget." Monitor your store's key metrics for 1–2 weeks after the change:

  • Conversion rate — Does the overall store conversion rate reflect the lift you saw in the test?

  • Revenue per visitor — Is the revenue improvement holding at the expected level?

  • No unexpected drops — Sometimes a change that wins in a test causes unintended issues when applied to all traffic (e.g. a price increase that works for US visitors but hurts international conversion). Watch for segment-level drops.

  • Customer feedback — If you implemented a price increase, monitor customer support inquiries for any pricing-related concerns.

Compounding wins

Once you've implemented a winner, that becomes your new baseline. Your next experiment should build on top of it.

For example:

  • Test 1: Price increase from $39 to $44. Winner: $44.

  • Test 2: Now test $44 vs $49 — can you go higher?

  • Test 3: Test new headline copy on the page with the winning $44 price.

Over time, compounding small wins produces significant cumulative improvements. This is how professional CRO programs work — not one big test, but a sequence of experiments that systematically improve performance.
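Sequential wins compound multiplicatively, not additively. A worked example with hypothetical per-test lifts in revenue per visitor:

```python
# Hypothetical relative lifts from three sequential winning tests,
# each measured against the previous (already improved) baseline.
lifts = [0.08, 0.05, 0.06]  # +8%, +5%, +6%

cumulative = 1.0
for lift in lifts:
    cumulative *= 1 + lift

# 1.08 * 1.05 * 1.06 = 1.202..., i.e. about +20.2% overall,
# slightly more than the +19% you'd get by simply adding the lifts.
print(f"Cumulative improvement: {cumulative - 1:.1%}")
```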

Common mistake: not implementing

The most common mistake isn't implementing wrong — it's not implementing at all. Merchants run a test, see a winner, and then get distracted and never make the change.

A winning test that isn't implemented is worth nothing. Put implementation on your task list the same day you declare a winner. The longer you wait, the more revenue you leave on the table.