
How to Run Split Tests Like a CRO Expert

Before you start implementing A/B split tests, here are a few secrets that only seasoned split testers are privy to.

1. Run an A/A/B/n test

Most businesses and CRO agencies (not all) that split test will only test their variation against the control (A/B/n). This is fine until someone or something casts doubt on the data. Perhaps a ‘cup half empty’ colleague who doesn’t believe the size of a win, or an unexpected loss that you don’t understand.

The logical next step here is to check the configuration of the testing platform and run the test again. But there’s a better way.

To save you the time and effort of having to run the test again, always make a habit of running an A/A/B test.

A = Your control page

A = An identical copy of your control page

B = Your ‘challenger’ or treatment page

By doing so, you prove the data as you go. If there’s a problem with the data, you’ll see it quickly. The two A variations should never reach statistical significance against each other, and the longer you leave the test, the nearer to a 0% difference between them you should get.

If one of the A variations reaches statistical significance against the other, you have a problem which needs investigating and correcting before you run the test again.
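If you want to sanity-check the two A variations yourself rather than relying on the dashboard, a two-proportion z-test is enough. Here’s a minimal sketch in Python; the visitor and conversion counts are placeholders, not real data.

```python
# Minimal A/A sanity check using a two-proportion z-test.
# The counts below are placeholders -- plug in the numbers your testing
# platform reports for the two A variations.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_1, n_1, conv_2, n_2):
    """Return the z statistic and two-sided p-value for two conversion rates."""
    p1, p2 = conv_1 / n_1, conv_2 / n_2
    pooled = (conv_1 + conv_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Two A variations with near-identical conversion rates.
z, p = two_proportion_z_test(conv_1=210, n_1=10_000, conv_2=198, n_2=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value (well above 0.05) is what you want
```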

2. If a split test loses, make sure you know why!

There’s nothing worse than running a test that unexpectedly loses and having no idea why. You won’t like it. Your team won’t like it. Your boss won’t like it either.

If only there was a way to know exactly why a test lost, so you could go straight back to the drawing board, make a few tweaks and turn it into the winner you expected.

Here’s how you can do exactly that…

Step 1) Create two versions of a Qualaroo survey asking a question that would reveal the reason why a user didn’t perform the action you were expecting.

Something like: “Hi, quick question before you go, if you didn’t [fill in the expected action here], what stopped you? Thanks!”

Step 2) To avoid skewing the test results by putting the survey on just the challenger page, create two separate but identical surveys and place one on the control too.
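Once the responses come in, compare the reasons given on the control against those given on the challenger. Here’s a rough, hypothetical sketch of tallying the answers from a CSV export; the file name and the ‘variant’ and ‘answer’ column names are assumptions, not a real Qualaroo export format.

```python
# Hypothetical sketch: tally exit-survey answers separately for the control
# and the challenger. Assumes a CSV export with "variant" and "answer"
# columns -- both the file name and the column names are placeholders.
import csv
from collections import Counter, defaultdict

def tally_reasons(path):
    counts = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["variant"]][row["answer"].strip().lower()] += 1
    return counts

for variant, reasons in tally_reasons("exit_survey_responses.csv").items():
    print(variant)
    for answer, count in reasons.most_common(5):
        print(f"  {count:>4}  {answer}")
```

The answers that show up on the challenger but not the control are usually where the fix lives.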

So, you’ve implemented your split test, and you’re beginning to see results. Here are a couple of rookie errors even the ‘CRO experts’ make sometimes.

1) Don’t pay attention to the initial results.

Over the first few days, rollercoaster swings from positive to negative (and back again) are normal. Check in too early and your emotions will be on that rollercoaster too.

2) Don’t stop the test too early.

Your testing platform may declare your variation a statistically significant winner before an adequate sample has been through the test. Often a test will show significance and then drop back again; this is normal, and it’s a flaw in the way the software companies choose to display results. In reality, the result was never significant in the first place.
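That ‘significant one day, gone the next’ pattern is exactly what repeated peeking does to a fixed-horizon test. A quick simulation makes the point: run an A/A test, where there is no real difference, and count how often it ever crosses the 95% significance line if you check it every day. The traffic and conversion figures below are arbitrary placeholders.

```python
# Sketch: simulate an A/A test (no real difference) and count how often it
# *ever* looks significant at 95% when you check the result every day.
# Traffic and conversion-rate figures are arbitrary placeholders.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
days, visitors_per_day, true_rate, runs = 14, 500, 0.03, 2000
false_positives = 0

for _ in range(runs):
    a = rng.binomial(visitors_per_day, true_rate, size=days).cumsum()
    b = rng.binomial(visitors_per_day, true_rate, size=days).cumsum()
    n = (np.arange(days) + 1) * visitors_per_day           # cumulative visitors per arm
    pooled = (a + b) / (2 * n)
    se = np.sqrt(pooled * (1 - pooled) * (2 / n)) + 1e-12  # epsilon avoids divide-by-zero
    z = (a / n - b / n) / se
    if np.any(np.abs(z) > norm.ppf(0.975)):                # "significant" on at least one day
        false_positives += 1

print(f"Declared a winner at some point in {false_positives / runs:.0%} of A/A tests")
```

Even though neither variation is better, a meaningful share of these tests look like winners at some point along the way, which is why you wait for your pre-calculated sample size instead of stopping at the first green light.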

As a general rule of thumb, try not to call a test before:

  • You have at least 100 conversions for the winning variation.
  • The test has run for at least 2 weeks.

To make absolutely sure your result is statistically significant, calculate your minimum sample size before you run the test, and stick to it no matter what.
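If you don’t have a sample size calculator to hand, here’s a sketch of the standard calculation for a two-proportion test. The baseline conversion rate and the minimum detectable effect are assumptions you set before the test; 5% significance and 80% power are the usual conventions.

```python
# Sketch: minimum visitors needed per variation for a two-proportion test.
# baseline_rate and mde (minimum detectable relative uplift) are choices you
# make before the test starts; alpha and power are the usual conventions.
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variation(baseline_rate, mde, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect a relative uplift of `mde`."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: a 1.1% baseline conversion rate, hoping to detect a 19% relative uplift.
print(sample_size_per_variation(baseline_rate=0.011, mde=0.19))
```

Whatever number this gives you, commit to it up front and don’t call the test until both variations have been through it.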

3) Don’t misread the test results. It can make you look silly when the results don’t follow through to the bottom line.

A common mistake is to see the % uplift and use that number to calculate the impact on the bottom line without taking into account the margin for error.

In the example below the test shows a 19.2% uplift, with a 96.1% confidence level.

[Image: a snapshot of a split test result from the Optimizely dashboard]

This does NOT mean there’s a 96.1% chance you got a 19.2% uplift.

It does mean that there’s a 96.1% chance your variation is better, and not worse, than the control page. And that during the test, the software observed an average uplift of 19.2%.

It also shows there was a deviation from the average of +/- 0.16% for the control, and +/- 0.17% for the variation.

So when calculating how much money this might make for the business, you should take these into account and express the win like this:

“There’s a 96.1% confidence level that the variation is better than the control. During the test we saw a conversion rate of between 0.94% and 1.26% for the control and between 1.13% and 1.3% for the variation. Taking the mean for each variation, and if all other variables stay the same, the conversion increase would be 19.2%, which would be worth x in annualised revenue.”
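To put a money figure on that without hiding the margin for error, it helps to report an annualised range rather than a single number. Here’s a rough sketch; the conversion-rate ranges come from the example above, while the visitor count and average order value are hypothetical placeholders.

```python
# Sketch: express the win as an annualised revenue range, not a single figure.
# annual_visitors and average_order_value are hypothetical placeholders; the
# conversion-rate ranges are taken from the worked example above.
annual_visitors = 1_000_000
average_order_value = 80.0

control_range = (0.0094, 0.0126)      # control conversion rate during the test
variation_range = (0.0113, 0.0130)    # variation conversion rate during the test

def annual_revenue(rate):
    return annual_visitors * rate * average_order_value

# Pessimistic: worst case for the variation against the best case for the control.
pessimistic = annual_revenue(variation_range[0]) - annual_revenue(control_range[1])
# Optimistic: best case for the variation against the worst case for the control.
optimistic = annual_revenue(variation_range[1]) - annual_revenue(control_range[0])

print(f"Annualised revenue impact, all else equal: {pessimistic:,.0f} to {optimistic:,.0f}")
```

Note that the pessimistic end of the range can dip below zero, which is exactly the point: the headline uplift on its own overstates how certain the bottom-line impact really is.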

So now you’ve learnt the most common mistakes people make when running a split test, how to read a split test result like a CRO expert, and how to turn a losing split test into a winner.

Is your CRO programme delivering the impact you hoped for?

Benchmark your CRO now for an immediate, free report packed with ACTIONABLE insights you and your team can implement today to increase conversion.

Takes only two minutes

If your CRO programme is not delivering the highest ROI of all of your marketing spend, then we should talk.