First, let me explain the formula. For us, there are three power metrics to monitor in a CRO programme: ‘W’, ‘E’ and ‘V’.
CRO success, or ‘S’, is defined by these three key variables: two of them you can only influence, and one you can directly control.
W is win rate; the percentage of your split-test results that show a positive uplift during the programme.
E is the effect observed during testing, e.g. a 5% increase in conversion rate or a 12% increase in Revenue Per Visitor - whichever metric you have chosen best represents a desirable business outcome for you.
V is for velocity: the number of split-tests you launch in a given period, say a month.
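Taken together, the formula can be sketched as a simple model. This is a rough illustration only: reading S as the product W × E × V, assuming winning uplifts compound on the baseline, and plugging in purely invented numbers:

```python
def programme_success(win_rate, avg_effect, velocity):
    """Rough monthly 'success' score for a CRO programme.

    win_rate   -- W: fraction of tests that win, e.g. 0.2 for 20%
    avg_effect -- E: average uplift of a winning test, e.g. 0.05 for 5%
    velocity   -- V: tests launched per month

    Assumption (ours, not the article's): winning uplifts compound
    multiplicatively on the baseline conversion metric.
    """
    expected_wins = win_rate * velocity
    return (1 + avg_effect) ** expected_wins - 1

# Invented figures: 20% win rate, 5% average uplift, 4 tests per month.
uplift = programme_success(0.2, 0.05, 4)
print(f"Expected monthly uplift: {uplift:.1%}")
```

Note that improving any one of the three variables lifts the result, which is why the rest of this post focuses on which of them you can actually control.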
By using accurate analytics data, talking to your users, and being led by the evidence rather than your preconceived notions, you can influence both the win rate and the effect. Do this well and you’ll avoid two pitfalls of testing - RATS and meeking.
The two pitfalls of testing
Good quality research stops you from Random Acts of Testing (also known as RATS). This is where you are just trying anything that might work, because you don’t have a body of solid evidence that is pointing you in a particular direction.
The other danger is meeking, a term originally coined by Karl Blanks; it means meek tweaking. So many times I have talked to people who are underwhelmed by the results of their CRO efforts, and it comes down to this: they have read or heard that a particular layout or a specific button colour works better than others, and off they go testing it.
Often the test is simply a tweak to their current site, perhaps a simple UX change. Because it doesn’t address or engage with the internal sales conversation happening in the head of the user, the result, even if positive, is a bit meh.
The danger of this is that if these meh results keep happening, it can start to undermine the programme’s credibility and people soon lose interest.
So which variable can I directly control?
So, while you can influence the win rate and the effect by being evidence-led, what you can directly control is the velocity of your split-tests; how many you can launch in a given time period, and how many variations you are testing at any one time.
Sometimes your results won’t be particularly impressive, but you can counter this by increasing the volume of tests. In this case it’s not just about increasing the volume, but also how quickly you are learning, and feeding this new knowledge into the process.
Think back to Dave Brailsford and the culture he embedded in British Cycling. His success came from looking for small things that could make a difference, from the shape of the helmet to ensuring the athletes avoided colds and infections and travelled with their own pillows to help them sleep.
All these changes added up to a massive gain over time, but what made the difference was the sheer volume of things he tried and the culture of experimentation he encouraged.
So if you are running 2 tests per month, ask yourself what you would need to launch 4 tests per month. If you are running 4 tests per month, what would it take to launch 8, and so on. Each doubling of your velocity compounds the gains your programme can deliver.
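As a rough illustration of why velocity compounds, here is a toy comparison of two programmes over a year. The 20% win rate, 5% average uplift and the compounding model are all invented assumptions for the sake of the sketch:

```python
def cumulative_uplift(velocity, months=12, win_rate=0.2, effect=0.05):
    """Toy model: cumulative uplift after `months` of testing.

    Assumes (our simplification) that expected wins accumulate as
    win_rate * velocity * months, and each win compounds a fixed
    `effect` uplift on the baseline.
    """
    expected_wins = win_rate * velocity * months
    return (1 + effect) ** expected_wins - 1

slow = cumulative_uplift(velocity=2)   # 2 tests per month
fast = cumulative_uplift(velocity=4)   # 4 tests per month
print(f"2 tests/month: {slow:.1%}  4 tests/month: {fast:.1%}")
```

In this sketch the faster programme ends the year more than twice as far ahead as the slower one, because each win compounds on the wins before it.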
To make sure you are testing as fast as you can, have a clear policy on culling under-performing tests. With one of our clients, we have a strict policy that a test has just 14 days to mature into a win or we’ll kill it and move on. You have to be dispassionate about this, because there is always an opportunity cost to testing.
Interested in reading more?
This is mistake number five in a series of seven posts, outlining the mistakes that almost everyone makes in conversion optimisation. Keep an eye out for the remaining two mistakes over the next few months.
Mistake #3: Are you assuming your GA data is accurate?
Read our ebook below to learn more about how the three power metrics - win rate x effect x velocity - can make your conversion programme a success.