Why poor-quality test code causes flicker and leaves your test data worthless [Mistake 7/7]

On August 20th 2018, the departure boards at Gatwick Airport failed. For a while there was no information about when flights were leaving or where they were flying to. Fortunately, the airport reverted to whiteboards and megaphones to keep passengers informed, but the loss of the boards created an obvious problem: vital data was missing. Nobody likes missing their flight.

Let’s deal with a truth about split-testing. Poorly written code and incorrect instrumentation will completely undermine your conversion programme, because the results of your split-tests won’t be robust or trustworthy. And if you can’t trust the results, you are flying blind, not knowing in which direction you are headed.

Sometimes the problem will be obvious. You will be able to spot flicker: a perceptible delay in which the original page is visible before the variant loads, so users see the content flash and change. That flicker is noticeable to users and disrupts their experience, undermining the results of whatever you were testing.
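
Experienced developers mitigate this with an anti-flicker guard. Here is a minimal sketch of the pattern, assuming a hypothetical applyVariant() function standing in for whatever call your testing platform uses to inject the variant: hide the page only until the variant is in place, and include a failsafe timeout so a broken script never leaves visitors staring at a blank page.

```typescript
// Minimal anti-flicker guard (sketch). `applyVariant` is a hypothetical
// stand-in for whatever call your testing platform uses to inject the variant.
declare function applyVariant(): Promise<void>;

const ANTI_FLICKER_TIMEOUT_MS = 1500; // failsafe: never hide the page longer than this

// Hide the page before first paint...
const style = document.createElement("style");
style.id = "anti-flicker";
style.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(style);

// ...and reveal it again by removing that style.
function reveal(): void {
  document.getElementById("anti-flicker")?.remove();
}

// Failsafe: if the testing script never responds, show the page anyway.
const failsafe = window.setTimeout(reveal, ANTI_FLICKER_TIMEOUT_MS);

applyVariant()
  .then(reveal)  // variant is in place: safe to show the page
  .catch(reveal) // on error, fall back to the control experience
  .finally(() => window.clearTimeout(failsafe));
```

The timeout is the important part: a guard without it simply trades flicker for the risk of a blank page when the testing script fails to load.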

Other times you may not see the problem. It may only affect a certain set of users, or those on a particular browser, and if you don’t spot the issue first, you will assume the results you are getting from the test are bona fide – and they won’t be. It’s what Donald Rumsfeld would call an unknown unknown: you had an unknown problem with your split-test, and its impact on the result is unknown.

Even if the test is behaving as expected, there can be problems with the instrumentation, meaning that the platform is not receiving or correctly recording the data. Again, the results will ‘look right’ but won’t be, and even worse, you are probably not aware of the issue.
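
One cheap defence is to make instrumentation failures loud instead of silent. The sketch below wraps result tracking so that malformed events are rejected and transport failures are logged; the endpoint URL and event shape are illustrative assumptions, not any specific platform’s API.

```typescript
// Sketch: validate experiment events before sending and surface failures,
// so instrumentation problems show up in monitoring rather than silently
// skewing your test data. The endpoint and event shape are assumptions.
interface ExperimentEvent {
  experimentId: string;
  variantId: string;
  goal: string;      // e.g. "add_to_basket"
  timestamp: number; // ms since epoch
}

async function trackExperimentEvent(event: ExperimentEvent): Promise<void> {
  // A missing ID makes the event unusable – drop it and say so.
  if (!event.experimentId || !event.variantId || !event.goal) {
    console.error("Dropped malformed experiment event", event);
    return;
  }
  try {
    const response = await fetch("https://example.com/collect", { // illustrative endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
      keepalive: true, // lets the request complete even if the page unloads
    });
    if (!response.ok) {
      console.error(`Tracking endpoint returned ${response.status}`, event);
    }
  } catch (err) {
    console.error("Failed to transmit experiment event", err, event);
  }
}
```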

The mistake you might be making here is assuming that any front-end developer can code a split-test. Yes, in terms of capability, they may be able to code it, but ask yourself these questions:

  • Does your developer have enough experience of coding split-tests to make sure the variant renders correctly?

  • Will the experiment work for all users and configurations?

  • Will the correct pages be excluded from the test? (a simple guard is sketched after this list)

  • Will the results data be correctly transmitted to and recorded by the testing platform?
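
On the page-exclusion point, a targeting guard is often no more than a path check run before the experiment activates. The sketch below uses illustrative paths only; in practice the exclusion list comes from your experiment’s targeting rules.

```typescript
// Sketch: keep the experiment off pages where it should never run.
// The excluded paths here are illustrative examples, not a recommendation.
const EXCLUDED_PATH_PREFIXES = ["/checkout", "/order-confirmation", "/account"];

function shouldActivateExperiment(pathname: string = window.location.pathname): boolean {
  return !EXCLUDED_PATH_PREFIXES.some((prefix) => pathname.startsWith(prefix));
}

if (shouldActivateExperiment()) {
  // ...apply the variant and start tracking...
}
```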

Experienced split-test developers know what works and, most importantly, what can often go wrong. Don’t do all the hard work of getting inside the heads of your users and developing a hypothesis, only to underinvest in the expertise of your development team.

 

Interested in reading more?

So to recap, if you want to get the most out of your current CRO programme, here’s what I would do:

  1. Don’t obsess about the layout of web pages

  2. Focus on speaking to your users and customers

  3. Make sure your web analytics is producing accurate data

  4. Recognise your biases

  5. Focus relentlessly on split-test velocity

  6. Prioritise systematically

  7. Use experienced split-test developers

Read our ebook below to learn more about how the three power metrics – win rate x effect x velocity – can make your conversion programme a success when combined with your prioritisation system.

FREE EBOOK

Discover how businesses have made the shift from CRO to experimentation – and how you can too

Download your copy today >


Request an Initial Strategic Review

If you’re serious about initiating change within your business, we’d like to offer you a 60-minute Initial Strategic Review.

“We’ll share what we’ve learned from decades of experience working with businesses using optimisation, innovation and experimentation to achieve business goals like yours”

Johann Van Tonder, COO, AWA digital.

This is an opportunity for ambitious online businesses that are already generating significant revenue… and know they can bring in more… but are not sure exactly how.

BOOK YOUR INITIAL STRATEGIC REVIEW NOW >>
