A/B Testing Best Practices


Some of the best practices for A/B testing are listed below:

  1. Duration: Run the A/B test for a minimum of two weeks, and keep it running until the selected confidence level is reached.

  2. Sample Size: Ensure that the sample size for each variation is large enough to make your results statistically valid. Several easy-to-use online sample size calculators can compute the required sample size from the baseline conversion rate and the minimum detectable effect you expect between the groups. If you do not pre-calculate the required sample size, you may stop the test too early, before it has collected enough data, or run it far too long and miss out on revenue (see the sketch after this list).

  3. The integrity of the test: Do not make changes to a journey while the test is running; doing so skews the data and leads to biased results.

  4. What to test: Test one feature at a time. Keeping everything else constant is the golden rule of A/B testing; it lets you gauge the impact of the one feature you added, modified, or removed.
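
To make the sample-size advice in item 2 concrete, the sketch below uses the standard two-proportion approximation with only the Python standard library. The baseline conversion rate, minimum detectable effect (MDE), significance, and power values are illustrative assumptions, not product defaults.

```python
# A minimal sketch of the required sample size per variation for a
# conversion metric, assuming a two-sided two-proportion z-test.
from math import ceil
from statistics import NormalDist  # Python 3.8+

def sample_size_per_variation(baseline_rate: float,
                              mde: float,
                              alpha: float = 0.05,
                              power: float = 0.80) -> int:
    """Visitors needed in EACH variation to detect an absolute lift of
    `mde` over `baseline_rate` at the given significance and power."""
    p1 = baseline_rate
    p2 = baseline_rate + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Example: 5% baseline conversion rate, detecting an absolute lift of
# 1 percentage point at 95% confidence and 80% power.
print(sample_size_per_variation(0.05, 0.01))  # 8155 visitors per variation
```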

 

Setting Up and Executing A/B Testing

  1. Choose A/B Tests from the top navigation bar.

  2. Click New Test.

  3. Enter a name for your test.

  4. Choose the audience that will be subjected to the test from the drop-down.

  5. Select the journeys that will receive traffic from the listing page, and click Next.

  6. To set up A/B testing across multiple journeys, toggle each journey On and specify its traffic split. Traffic allocation across journeys must total 100%.

  7. Edit the following test fields:

Goal

The purpose of conducting the A/B test. Determines the measurement criteria. Choose one of the following values from the drop-down:

  • Clicks on Recommendation 

  • Revenue

Note: If you assign traffic to a no-treatment (control) journey, Clicks on Recommendation cannot be used as the measurement criteria.

Metric

The metric to be calculated and analyzed based on the measurement criteria.

If you choose Clicks on Recommendation as the measurement criteria, you can choose one of the following metrics from the drop-down:

  • Click-Through Rate

  • User Engagement Rate


If you choose Revenue as the measurement criteria, you can choose one of the following metrics from the drop-down:

  • Revenue Per Visit

  • Average Order Value

  • Conversion Rate


You can track all other metrics in the Performance Metrics tab, regardless of the KPI selected. (The sketch after these steps shows how these metrics are conventionally computed.)

Confidence in test results

Confidence defines the level of certainty required of the A/B test result, so that your business decisions are backed by statistics.

The confidence % sets the threshold for the probability of a false positive: a 95% confidence means there is a 5% chance of a false positive.

Choose the confidence % for your A/B test from the drop-down. The default, which is also the recommended best practice, is 95% (see the sketch at the end of this section).



  8. Click Publish.

Note: Click Save For later to save the testing configuration for future use.
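
For reference, the sketch below shows how the revenue and click metrics listed under Metric are conventionally defined. All counts are hypothetical inputs, and the product's exact formulas may differ; User Engagement Rate is omitted because its definition varies between products.

```python
# Conventional definitions of the selectable metrics, computed from
# hypothetical counts (these are NOT product formulas or real data).
visits = 10_000        # visits exposed to the journey
impressions = 25_000   # recommendation impressions served
clicks = 1_250         # clicks on recommendations
orders = 400           # completed orders
revenue = 32_000.00    # total revenue attributed to the journey

click_through_rate = clicks / impressions  # 0.05 -> 5.0%
revenue_per_visit = revenue / visits       # 3.20
average_order_value = revenue / orders     # 80.00
conversion_rate = orders / visits          # 0.04 -> 4.0%

print(f"Click-Through Rate:  {click_through_rate:.1%}")
print(f"Revenue Per Visit:   {revenue_per_visit:.2f}")
print(f"Average Order Value: {average_order_value:.2f}")
print(f"Conversion Rate:     {conversion_rate:.1%}")
```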


V1.0 supports A/B testing only against All Visitors.
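
As a sketch of the statistics behind the confidence field described above: at 95% confidence the test tolerates a 5% false-positive probability, which corresponds to a two-sided significance level of 0.05. The example below uses a standard two-proportion z-test with hypothetical conversion counts; the product's internal test may differ.

```python
# A minimal sketch of declaring significance at a chosen confidence level,
# assuming a two-sided two-proportion z-test (hypothetical counts).
from math import sqrt
from statistics import NormalDist

def is_significant(conv_a: int, n_a: int,
                   conv_b: int, n_b: int,
                   confidence: float = 0.95) -> bool:
    """True when the observed difference in conversion rates clears the
    false-positive threshold implied by `confidence` (alpha = 1 - confidence)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_value < (1 - confidence)             # e.g. p < 0.05 at 95%

# Example: control converts 400/10,000 visits, variation 470/10,000.
print(is_significant(400, 10_000, 470, 10_000))   # True at 95% confidence
```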

 

Pause A/B Test

To temporarily pause a live A/B test, for example to run a campaign or to expose all users to another journey, do the following:


  1. Choose A/B Tests from the top navigation bar.

  2. Click to pause an A/B test.


Resume A/B Test

When you resume an A/B test, the journey that was served to the audience while the test was paused stops, and the test resumes.

Stop A/B Test

Once the A/B test is conclusive, or if you wish to end it at any point, do the following:


  1. Click to stop an ongoing A/B test.


Note: It is recommended that you stop the test only after the confidence level is reached and a winner is declared.

Restart A/B Test

To re-run a historic A/B test that was stopped, duplicate it and start it as a new test.



Viewing A/B test results