Do you know what confirmation bias is?
If you don’t: confirmation bias is our tendency to seek out data that supports our existing beliefs, or the way we already see the world.
In our view, ‘I know’ are two of the most dangerous words.
It’s a problem for a number of reasons.
Firstly, in the research phase of a CRO project, when you are looking at data to better understand the experience of your visitors and customers, you will latch on to data that confirms a belief you already hold. This means you may give greater weight to that evidence and less importance to data that refutes your belief.
Most seriously, once you have found evidence that aligns with your theories about your users, you may stop looking at other potentially interesting data sources. You stop researching and being curious, now comfortable in the knowledge that the world has proved your thinking right after all.
Confirmation bias is also a danger once you start testing. If you are testing one of your pet theories and it shows an extremely modest uplift, you are likely to interpret that result more favourably than one that doesn’t support your beliefs. You can become emotionally invested in the split-test result being a ‘win’ for you and your good judgement.
A second problem: even if ‘your’ version didn’t produce an outright positive result, you start looking for patterns in the data that suggest it is a ‘win’. You examine segment after segment trying to find that ‘win’. Maybe it was a win for paid search traffic, for visitors who saw this page first, or for those on their fifth visit to the site. But as you slice into these smaller segments, each one has a smaller sample, so any ‘win’ you find there is even less likely to be statistically significant.
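You can see why segment-hunting is dangerous with a quick simulation. The sketch below (Python standard library only; the traffic volumes, 5% conversion rate, and segment count are made-up illustrations, not real data) runs A/A tests where there is no real difference between the versions, then checks more and more segments for a ‘significant’ result at p < 0.05. The more segments you inspect, the more likely you are to find a ‘win’ that is pure noise.

```python
import math
import random

random.seed(42)

def z_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test p-value (normal approximation)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    # Normal CDF via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def aa_test_finds_false_win(n_segments, visitors=1000, rate=0.05):
    """Simulate an A/A test (both arms identical). Return True if ANY
    of the inspected segments shows p < 0.05 by chance alone."""
    for _ in range(n_segments):
        conv_a = sum(random.random() < rate for _ in range(visitors))
        conv_b = sum(random.random() < rate for _ in range(visitors))
        if z_test_pvalue(conv_a, visitors, conv_b, visitors) < 0.05:
            return True
    return False

runs = 300
false_wins_1 = sum(aa_test_finds_false_win(1) for _ in range(runs)) / runs
false_wins_10 = sum(aa_test_finds_false_win(10) for _ in range(runs)) / runs
print(f"False 'wins' when checking 1 segment:   {false_wins_1:.0%}")
print(f"False 'wins' when checking 10 segments: {false_wins_10:.0%}")
```

With ten segments the chance of at least one spurious ‘win’ climbs far above the 5% you thought you were working with, which is why a result found by trawling segments needs to be re-tested, not embedded.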
Some people look for wins in secondary metrics, such as add-to-basket rate or bounce rate, because they are not seeing the uplift in their primary metric, like RPV or conversion rate. However, they are called secondary metrics for a reason.
If you then hard-code what you think is that winner into the site, it might not actually be a winner, perhaps because you declared the test result earlier than you otherwise would have. This could hurt, and definitely won’t help, sales. But since it’s no longer part of a split-test, it will be hard to detect the damage it is doing.
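Declaring early is its own statistical trap. This sketch (again standard-library Python, with invented traffic numbers and a 5% baseline conversion rate) simulates an A/A test with no real difference, but peeks at the result repeatedly as visitors accumulate and stops the moment p dips below 0.05. Peeking inflates the false-winner rate well above the nominal 5%.

```python
import math
import random

random.seed(7)

def z_pvalue(c_a, n_a, c_b, n_b):
    """Two-sided two-proportion z-test p-value (normal approximation)."""
    p = (c_a + c_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (c_b / n_b - c_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def aa_test(checks, visitors_per_check, rate=0.05):
    """A/A test with no real difference. Look at the data `checks` times
    as it accumulates; return True if we ever declare a 'winner'."""
    conv_a = conv_b = n = 0
    for _ in range(checks):
        conv_a += sum(random.random() < rate for _ in range(visitors_per_check))
        conv_b += sum(random.random() < rate for _ in range(visitors_per_check))
        n += visitors_per_check
        if z_pvalue(conv_a, n, conv_b, n) < 0.05:
            return True  # stopped early on a false 'winner'
    return False

runs = 300
once = sum(aa_test(1, 5000) for _ in range(runs)) / runs      # single look at the end
peeking = sum(aa_test(10, 500) for _ in range(runs)) / runs   # peek 10 times
print(f"False winners, one look at the end: {once:.0%}")
print(f"False winners, peeking 10 times:    {peeking:.0%}")
```

The same total traffic is observed in both cases; the only difference is how often you look. Deciding the sample size (or test duration) in advance, and only calling the result then, keeps the error rate where you expect it.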
This second problem spawns a third: you now think your ‘winning’ idea is a rich seam for other optimisation ideas, so you start filtering your list of test ideas through that lens. All the while you could be shutting out other ideas and opportunities that could deliver far bigger uplifts.
This is not to say that you can rid yourself of confirmation bias, but you can be alert to the role your own beliefs and knowledge play in understanding users, interpreting split-test results, and deciding which opportunities to dig into further.
The best and most truly scientific (but perhaps most difficult) approach is to go in with an open mind: we don’t care what happens. We have put this test out there and whatever happens, happens; we will learn from it no matter what the result.
This is mistake number four in a series of seven posts, outlining the mistakes that almost everyone makes in conversion optimisation. Keep an eye out for the remaining three mistakes over the next few months.
Mistake #3: Are you assuming your GA data is accurate?