Why poor-quality test code causes flicker and leaves your test data worthless
On 20 August 2018, the departures board at Gatwick failed. For a while there was no information about when flights were leaving or where they were flying to. Fortunately, the airport reverted to whiteboards and megaphones to notify passengers, but the loss of the departure boards created an obvious problem – there was missing data – and it was vital data. Nobody likes missing their flight.
Let’s deal with a truth about split-testing. Poorly written code and incorrect instrumentation will completely undermine your conversion programme, because the results of your split-tests won’t be robust or trustworthy. And if you can’t trust the results, you will be flying blind, not knowing in which direction you are headed.
Sometimes the problem will be obvious. You will be able to spot flicker – a perceptible delay in which the original page appears before the variant loads. That flicker will be noticeable to users and will disrupt their experience, undermining the results of whatever you were testing.

Other times you may not see the problem at all: it may only affect a certain set of users, or those on a particular browser. If you don’t spot the issue first, you will assume the results you are getting from the test are bona fide – and they won’t be. It’s what Donald Rumsfeld would call an unknown unknown. You had an unknown problem with your split-test, and its impact on the result is unknown.
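One common mitigation is an “anti-flicker” guard: hide the page (or the affected element) before the variant is applied, and reveal it once the change is in place, with a fail-safe timeout so a broken experiment never leaves the page blank. The sketch below illustrates the idea only – `withAntiFlicker`, the `Page` type, and the timeout value are hypothetical names and choices, not any testing platform’s actual API:

```typescript
// Minimal stand-in for the part of the page the experiment touches.
// In a real browser this would be e.g. document.body plus a CSS class.
type Page = { hidden: boolean; content: string };

// Hide the page while the variant is applied, then reveal it.
// The timeout is a fail-safe: if applyVariant hangs or throws,
// the visitor still sees something rather than a blank page.
function withAntiFlicker(
  page: Page,
  applyVariant: (page: Page) => void,
  timeoutMs = 1000,
): void {
  page.hidden = true;
  const reveal = () => { page.hidden = false; };
  const failSafe = setTimeout(reveal, timeoutMs);
  try {
    applyVariant(page);
  } finally {
    clearTimeout(failSafe);
    reveal();
  }
}
```

The key point is the fail-safe: anti-flicker code that can leave a page permanently hidden is worse than the flicker itself.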
Even if the test is behaving as expected, there can be problems with the instrumentation, meaning the system is not receiving or correctly recording the data. Again, the results will ‘look right’ but won’t be, and even worse, you are probably not even aware of the issue.
The mistake you might be making here is assuming that any front-end developer can code a split-test. In terms of raw capability, they probably can, but ask yourself these questions:
- Does your developer have enough experience of coding split-tests to make sure the variant renders correctly?
- Will the experiment work for all users and configurations?
- Will the correct pages be excluded from the test?
- Will the results data be correctly transmitted to and recorded by the testing platform?
Experienced split-test developers know what works and, most importantly, what can go wrong. Don’t go to all the hard work of getting inside the heads of your users and developing a hypothesis, only to underinvest in the expertise of your development team.
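On the last of those questions – whether results data actually reaches the testing platform – one classic instrumentation bug is double-counting: firing an exposure event every time the experiment code runs, rather than once per visitor per experiment. A minimal sketch of deduplicated exposure tracking follows; the `ExposureTracker` class and its field names are hypothetical illustrations, not any vendor’s API:

```typescript
// Records which visitors have been exposed to which experiment,
// sending each exposure to the analytics backend at most once.
class ExposureTracker {
  private seen = new Set<string>();
  // In production this would be an HTTP call; here we just queue events.
  public sent: Array<{ visitor: string; experiment: string; variant: string }> = [];

  trackExposure(visitor: string, experiment: string, variant: string): void {
    const key = `${visitor}|${experiment}`;
    if (this.seen.has(key)) return; // already recorded: avoid double-counting
    this.seen.add(key);
    this.sent.push({ visitor, experiment, variant });
  }
}
```

Double-counted exposures inflate sample sizes and dilute measured effects, which is exactly the kind of silently wrong data the paragraph above warns about.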
So to recap, if you want to get the most out of your current CRO programme, here’s what I would do:
- Don’t obsess about the layout of web pages
- Focus on speaking to your users and customers
- Make sure your web analytics is producing accurate data
- Recognise your biases
- Focus relentlessly on split-test velocity
- Prioritise systematically
- Use experienced split-test developers
Read our ebook below to learn more about how the three power metrics – win rate x effect x velocity – can, alongside your prioritisation system, make your conversion programme a success.
People from Facebook, FarFetch and RS Components receive our newsletter. You can too. Subscribe now.
Interested in turning experimentation and testing into an advantage for your entire business?