The following is an excerpt adapted from ‘A/B Testing: How To Get Really Big Sales Uplifts’, our free whitepaper that shows you how to achieve solid results through split testing.
Declaring a split test win is a great feeling.
That feeling is even better when you put the test page live on your main site and start to see a real impact on the bottom line for the whole of your e-commerce business.
However, research conducted by Qubit suggests that most A/B tests are flawed. A test may look like a success when, statistically, the result simply isn't valid.
So what are the common mistakes people make when split testing and how can you avoid them? Below we’ve shared five ways to make sure your split test programme is successful.
When testing for conversion rate uplifts, one way of staying safe is to use a duration calculator, like the one provided free by Visual Website Optimizer (VWO).
Determine your test duration in advance and don’t draw any conclusions until the full period has been completed. This should ensure that you take in a long enough period to include multiple business cycles and that the statistical confidence levels can be trusted.
There are many metrics you can look at to evaluate your split test.
For e-commerce sites, our preference is to track Revenue Per Visitor (RPV) as the primary goal. Quite simply, this is the total value of all orders divided by the number of visitors to the site.
We find this metric to be a useful benchmark, because it is solidly linked to revenue. This may come from customers spending more (Average Order Value) or more visitors buying (Conversion) or a combination of both. Whichever it is, your revenue improves.
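The relationship described above can be sketched in a few lines: RPV equals Average Order Value multiplied by conversion rate, so an improvement in either one lifts RPV. The figures below are invented purely for illustration.

```python
def revenue_per_visitor(order_values, visitors):
    """RPV = total value of all orders / number of visitors."""
    return sum(order_values) / visitors

orders = [50.0, 80.0, 120.0]  # three orders in the period
visitors = 100

rpv = revenue_per_visitor(orders, visitors)  # 250 / 100 = 2.50
aov = sum(orders) / len(orders)              # Average Order Value
conversion = len(orders) / visitors          # fraction who bought

# RPV decomposes into AOV x conversion rate:
assert abs(rpv - aov * conversion) < 1e-9
```

Because RPV captures both levers at once, it avoids the trap of declaring a win on conversion rate alone while average order value quietly falls.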
We’ve often seen inexperienced split testers declare “wins” prematurely.
When you’re running split tests it’s tempting to take a peek at the unfolding graph while data is being gathered. But be wary. Split tests often bounce around between positive and negative before finally settling down. Resist the temptation to declare the result too soon!
Another common pattern is for a losing baseline to gradually catch up with the variation. Be especially cautious whenever you find an extremely high uplift in revenue. Let the test gather more data to be sure, especially if it has only been exposed to low traffic. By declaring a result at the first sign of a positive increase, you’re fooling yourself and may harm your business.
If you have a large amount of traffic, it’s quite possible to get a statistically significant result within days. However, even though the numbers stack up, you should still be wary of normal day-to-day variations. For this reason, we always recommend running any test for at least two full weeks, including weekends.
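One simple sanity check when you do reach the end of a test is a two-proportion z-test on the conversion rates. The sketch below is a minimal, assumption-laden illustration (the function name and numbers are ours); real testing tools run this kind of check for you.

```python
from math import sqrt

def conversion_z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test comparing variation B against control A.

    Returns the z-score; |z| > 1.96 indicates significance at the
    95% level for a two-sided test.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled)
                          * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / standard_error

# 300 conversions from 10,000 visitors (control)
# vs 345 conversions from 10,000 visitors (variation):
z = conversion_z_score(300, 10000, 345, 10000)
```

In this made-up example the variation shows a 15% relative uplift, yet the z-score comes out below 1.96, so the result is not yet significant at the 95% level. That is exactly why eyeballing an encouraging graph mid-test is so misleading.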
Try to account for any regular patterns of change, such as mail shots and other marketing campaigns, by including a representative spread of time in your test.
In certain cases you may want to extend the duration of a test to include key seasons such as Christmas. However, the delay to further testing, learning and development may outweigh the benefits of increased certainty.
These are five key tactics we use to ensure that our split test results translate into higher sales when the winning variation is launched to 100% of visitors. Are there any that we’ve missed? What do you do to make sure your bottom line sees the benefit of your split test programme? Leave a comment in the box below.
Or, if you’re not currently running a split test programme, but are looking for some guidance on how to do so successfully, start with a free consultation.
Sign up to our newsletter and get all of the latest news straight to you.
If you’re serious about initiating change within your business, we’d like to offer you a 60-minute Initial Strategic Review.
“We’ll share what we’ve learned from decades of experience working with businesses using optimisation, innovation and experimentation to achieve business goals like yours.”