
Kickstarting and keeping the A/B testing momentum

A/B tests are run to learn how specific changes and features affect users’ experience, satisfaction, system performance and more. Some teams, products and companies run thousands of A/B tests every year.

Lukas Vermeer, formerly Director of Experimentation at Booking.com and now at Vistaprint, spoke at our Experimentation Works conference. In both his roles, he has helped people across the organisation run experiments in order to make better decisions and build better products.

To experience the full benefit of A/B testing, organisations need to evolve both technically and culturally, and this needs to happen in an iterative way. One of the best ways to describe this iterative path (a step-by-step evolution through the Crawl, Walk, Run and Fly phases) is with a flywheel.

In his talk, Lukas discusses the flywheel model that helps companies drive their evolution towards a stronger data-driven experimentation culture.

Lukas Vermeer, Director of Experimentation at Vistaprint

Below we summarise the key points from Lukas’ talk and cover the stages of the A/B testing flywheel, enabling you to kickstart and keep the A/B testing momentum in your business. Keep reading to learn more or watch the full video above.

The A/B Testing Flywheel

The A/B testing Flywheel is based on the Experimentation Growth Model paper co-written by Lukas Vermeer alongside Aleksander Fabijan, Pavel Dmitriev, and Colin McFarland. The paper abstract reads, “This four-stage model addresses the seven critical aspects of experimentation and can help companies to transform their organizations into learning laboratories where new ideas can be tested with scientific accuracy. Ultimately, this should lead to better products and services.”

The A/B testing flywheel incorporates the four phases of the Experimentation Growth Model (Crawl, Walk, Run, Fly) and builds on them, creating the five-step flywheel pictured above, which can be used to assess how mature a company’s experimentation practice is.

Feeding any part of the flywheel accelerates the loop, speeding up the growth of A/B testing culture in an organisation. Conversely, lack of investment in any one of these areas will slow down or stop the growth of A/B testing. Thus, the recipe for successfully growing the culture of A/B testing is simple: push the flywheel, accelerate momentum, then repeat.

Essentially, we believe that the objective of running more A/B tests is to make better decisions. That is what we ultimately want. If the tests we run lead to better decisions, and we consistently report on the value of those decisions, then the more tests we run, the more value we demonstrate.

If we can demonstrate value and therefore increase interest in A/B testing, then we can start investing more in A/B testing. If we can invest more in A/B testing, we can focus on automating the processes that create friction in the cycle, taking some of the work out of our data scientists’ hands and therefore lowering the human cost.

And finally, if we can lower the human cost, then we can support more decisions, demonstrate more value and invest more, and so the cycle repeats.

  1. The First Turn

The first phase is the first turn. Any physical, mechanical flywheel has some inertia to overcome before it starts moving, so we have to make that first turn to get past it. Once it is moving, you have to think in terms of adding more push or removing more friction to make it spin more quickly. That is why we distinguish between two phases: the ‘first turn’ and then the ‘make it spin’ phase, which begins once the flywheel is turning.

  2. Measuring Value

When you think about measuring value, you shouldn’t just be thinking about business value. Ideally that’s where you end up, but for this initial phase it’s actually more productive to start looking for what is called a counterintuitive result: something that people, or at least some people, did not expect.

This makes it more likely that we will find something that increases interest in A/B testing because it’s counterintuitive to some. At the same time we also make it easier to actually measure the thing that we are interested in for this particular experiment. So find something where there is some disagreement.
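As a rough illustration of what measuring the thing we are interested in can look like for a single test, here is a minimal, hypothetical sketch comparing conversion counts between a control and a variant. The traffic numbers, conversion counts and choice of test are assumptions for the example, not anything taken from the talk.

```python
# Hypothetical sketch of measuring one A/B test outcome; all numbers
# below are illustrative, not real experiment data.
from scipy.stats import chi2_contingency

# Rows: [converted, did not convert] for control (A) and variant (B).
table = [
    [1210, 48790],  # A: 50,000 users, 1,210 conversions (~2.42%)
    [1090, 48910],  # B: 50,000 users, 1,090 conversions (~2.18%)
]
chi2, p_value, _, _ = chi2_contingency(table)

rate_a = table[0][0] / sum(table[0])
rate_b = table[1][0] / sum(table[1])
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p = {p_value:.3f}")
# A result is "counterintuitive" simply when a comparison like this
# contradicts what some of the people who picked the experiment predicted.
```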

  3. Increasing Interest

This counterintuitive result kickstarts the interest in experimentation in the right way, because it helps people to realise that experimentation is not just about finding winners, but it’s also about learning about the world. 

This result hopefully also sparks interest in the scientific method in general. It makes people think about what these experiments are supposed to be adding in the first place. 

So make sure that you take that counterintuitive result and bring it back to everyone who was involved in picking that experiment. Highlight the fact that not everyone predicted this outcome, and that some expected something else to happen.

  4. Investing in Testing Infrastructure and Technology

When it comes to investing in testing infrastructure, you might not want to build a fully-fledged experimentation platform right from day one, as investing in such a platform can be a very expensive exercise. 

Some companies might decide that they don’t want to build their own. They might buy their testing infrastructure off the shelf, and luckily there are lots of great solutions and vendors on the market.

You want it to be reliable, robust and quick to get started with, so that you have something tangible. The first experiment should give you numbers that you trust, and it should give you those numbers quickly.

  5. Lowering the Costs

Now, when you have invested in this initial A/B infrastructure, you can already start thinking about how you would lower the human cost involved. At this stage, you’re probably relying heavily on data scientists to do a lot of the work.

Now you can start thinking about automating some of these checks, for example sample ratio mismatch (SRM) tests. These can be easily automated, taking the task out of the hands of the data scientists and thereby lowering the human cost. This will be helpful in the next iteration.
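To make this concrete, here is a minimal sketch of what an automated SRM check could look like, using a chi-squared goodness-of-fit test. The 50/50 split, the counts and the alert threshold are illustrative assumptions, not anything prescribed in the talk.

```python
# Minimal sketch of an automated sample ratio mismatch (SRM) check,
# assuming a 50/50 intended split; counts and threshold are illustrative.
from scipy.stats import chisquare

def srm_check(observed_counts, expected_ratios, alpha=0.001):
    """Flag an SRM when the observed assignment counts deviate from the
    intended split by more than chance alone would plausibly explain."""
    total = sum(observed_counts)
    expected_counts = [total * r for r in expected_ratios]
    _, p_value = chisquare(f_obs=observed_counts, f_exp=expected_counts)
    return p_value < alpha, p_value

# Example: an intended 50/50 split where variant B received noticeably
# fewer users than expected, so the check fires.
is_srm, p = srm_check([50700, 49300], [0.5, 0.5])
print(f"SRM detected: {is_srm} (p = {p:.6f})")
```

A check like this can run automatically for every experiment, so nobody has to remember to ask a data scientist to verify the split by hand.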

You might find that some of these things are very specific to your product or to your infrastructure. 

So think carefully about ‘Where are my data scientists spending the most time?’, ‘Where are my developers spending the most time?’ and ‘Where can I remove the most friction from the process so that the next turn goes faster?’

  6. Making it Spin

Speaking of that next turn, this is called making it spin. At this point we have run an experiment and gone through all of the steps; now we want to continue to invest in each one of them, making sure that we remove friction as we go along.

This is how we keep the A/B Testing momentum going.

Watch The Full Video Above To Find Out More About How To Kickstart and Keep Your A/B Testing Momentum Going!
