Have you become disillusioned with A/B testing for your website because you’ve tried it and found it didn’t work?
Your designers and developers spent ages coming up with beautiful new web pages to test. You set up the controlled experiment, showing half your visitors version A (your existing web page) and half version B (the shiny new variation).
The winner is the version that makes visitors get their wallets out, and you hoped it would be the ‘new improved’ one. Yet in practice your split tests are not delivering the results you expect, and your customers steadfastly spend more when you show them the original designs.
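The 50/50 split described above is usually done deterministically rather than by a coin flip per page view, so that a returning visitor always sees the same variation. A minimal sketch (the function name and visitor-ID scheme are illustrative assumptions, not any particular testing tool’s API):

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Bucket a visitor into variation 'A' or 'B' (a 50/50 split).

    Hashing the visitor ID, rather than choosing randomly on each
    request, ensures the same visitor always sees the same version.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

In practice a testing platform handles this for you; the point is only that assignment must be consistent per visitor, or the two groups become contaminated.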
This is not a sign that your website is fine as it is and can’t be improved. Neither is it a reflection on the technique of split testing. It’s a sign that you’ve cheated yourself out of the bigger prize that A/B testing can offer.
The secret of A/B testing success
Done properly, A/B testing is a route to huge sales increases and double-digit growth. The fact is that success from A/B testing is guaranteed – and like so many things in life, it’s simple when you know how.
Often, the reason why the new creative treatment comes back with a disappointing test result is lack of planning. When objectives are not defined at the beginning and the A/B test programme is run almost for the sake of it, or to test out some ‘best practice’ theory, it’s likely to be doomed from the outset.
Even copying something that worked on another website can be a recipe for disaster. So many factors come into play with every test that you can’t assume that what worked for one site will work on another.
To guarantee success, A/B testing needs to be part of a Conversion Rate Optimisation (CRO) programme systematically following a few simple steps. This ensures that the new web pages have been developed in response to a genuine customer need, and designed holistically to really answer the issues your customers have with your existing pages.
The more time and effort that goes into the planning stages, the higher your chances of producing a new web page that makes your visitors want to buy more.
How to make your split tests deliver the results you expect and guarantee success
Step 1: Deep dive analysis – get an insight into what’s really wrong
The first step - understanding your customers and visitors - is crucial. This is known as Insight Generation and helps you understand:
- Why your customers chose you
- Why they nearly didn’t choose you
- Where potential customers are leaving your site and, crucially
- Why those potential customers are leaving at that point
Insights can come from many sources, including research and usability testing. However, it’s likely your business already has material lying around that will give you a rich set of customer insights. Customer support emails and the knowledge acquired by customer service staff are just two examples of valuable information waiting to be tapped into.
Step 2: Prioritising the insights and knowledge
Once you have gathered this information, spoken to your visitors and analysed your raw data, you will have a host of promising areas that you could think about testing. But which will give you the biggest results? How do you decide what to focus on first?
The way you prioritise at this stage is another key to success in A/B testing. At AWA we do this methodically using a system we call Triage™. (The term was originally used by medical staff on the battlefield when they had to decide which casualties to treat first.)
To find out which test might have the highest chance of improving sales, you simply look at two things:
- How likely is it to make an impact on sales?
- How strongly is it backed up by evidence?
It may sound obvious, but the ones to focus on first are the ones that look like they could have a big impact on sales AND where there is a lot of evidence to support that hypothesis.
Step 3: Planning your split tests
After Triage™, the next step is simply to create a list of the hypotheses you plan to develop into new creative treatments, in the order in which you plan to split test them. We call this the Optimisation Plan.
It’s now time to put it into practice, but resist the temptation to go straight to design. This could be another reason why your previous A/B split tests didn’t give you the stellar sales uplift you were hoping for.
Instead, create a wireframe and show it to the usability testers you spoke to in the early Insight Generation phase. Does it really address their issues? A wireframe helps you get honest feedback on the concept before splashing out on fancy designs.
Step 4: Repeat, move on or learn?
Once the split test has run for long enough for the result to be statistically valid, you can review your Optimisation Plan.
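“Statistically valid” here means the difference between the two variations is unlikely to be chance. One common way to check this is a two-proportion z-test; the sketch below (function name and the example numbers are assumptions for illustration) compares conversion rates at roughly the 95% confidence level:

```python
import math

def ab_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Two-proportion z-test for an A/B result.

    conv_a / n_a: conversions and visitors for variation A,
    conv_b / n_b: the same for variation B.
    z_crit=1.96 corresponds to ~95% confidence, two-sided.
    Returns (is_significant, z_statistic).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return abs(z) >= z_crit, z
```

For example, 200 conversions from 10,000 visitors on A versus 260 from 10,000 on B comes out significant, while 200 versus 205 does not; in the latter case the test simply hasn’t shown a real difference, however tempting the raw numbers look.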
The skill here is to decide whether to run the next test, or whether there are additional uplifts to be gained by tweaking the winning variation.
But even if the test did go badly, don’t give up. You can never fail with split testing, only learn. With the help of your usability testers, try to understand why and develop a new creative treatment. It could be just the insight you need so that your next test is the one that gives you a record-breaking increase in sales.
A/B split testing is a proven way to get solid growth for your online business – but only when you test the right things.
- Do the research and analysis so that you know what will change your visitors’ behaviour and make them want to buy
- Prioritise all your potential improvements by assessing each one for impact and evidence
- Create a plan and do the tests with the most potential first
- Get the most out of your test and then move on to the next
- Don’t give up on a disappointing result – find out why it bombed and learn. It could be the key to your best ever test.
What’s been the worst split test you’ve ever run? Have these tips inspired you to improve your split testing process? Please tell us about your experiences with A/B testing.
Read our ebook below to find out how a focus on three power metrics can double your CRO success.