The impact of A/B-testing on site performance

A/B-testing allows you to test the business impact of changes to your site or app within your experimentation programme. It’s the smart thing to do. 

However, some managers are concerned about the impact of A/B-testing on site performance. In particular, there are questions about page load time. We also hear from teams who have previously run A/B-tests and experienced “flicker”.

Stella Chatzianagnostou, Experimentation Engineer at AWA digital, explains these consequences of client-side testing and what you can do about them.

What is flicker?

Flicker happens when the end user can briefly see the original experience flash up on their screen, before the variation is displayed a moment later. Hence it’s also known as Flash of Original Content (FOOC).

It’s a consequence of client-side testing, where changes in the variation are made in the end user’s browser. Server-side testing, where the changes in the variation are made on your server, is not susceptible to this problem.

Here’s why it happens:

Every time you open a webpage, a request is sent to the server. When a test is running on that page, extra code is added to it in the form of a JavaScript snippet.

Adding this extra JavaScript means a little more time is needed for the page to load. Mostly, users won’t notice it at all. On average, a snippet like this could add about 350 to 500 milliseconds to the page load time.

But in some cases the additional time introduced by the snippet can cause flicker.
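
To make this concrete, here is a minimal, hypothetical sketch of what a client-side variation typically does: it waits for the page and then swaps content in the visitor’s browser. The selector, copy and wiring are illustrative assumptions, not any vendor’s actual snippet; the point is simply that the change runs after the browser may already have painted the original.

```typescript
// Minimal, illustrative sketch of a client-side variation. The selector and copy are
// made up for this example; the point is that the change runs in the visitor's browser,
// often after the original content has already been painted.
function applyVariation(): void {
  const headline = document.querySelector<HTMLElement>("h1.hero-title");
  if (!headline) {
    return; // the element isn't on this page, so leave it untouched
  }
  headline.textContent = "New, bolder value proposition";
}

// If the snippet loads late (for example via a tag manager), the original headline is
// visible until this runs, and that brief gap is the flicker (FOOC) users may notice.
if (document.readyState === "loading") {
  document.addEventListener("DOMContentLoaded", applyVariation);
} else {
  applyVariation();
}
```

Because this runs after the initial HTML has been parsed, a slow snippet widens the gap between “original painted” and “variation applied”, which is exactly where flicker lives.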

What can be done to reduce the potential of flicker?

Our engineering team creates client-side A/B-tests every day, and has done so for almost ten years. They have figured out how to minimise the risk of flicker. Here are their top tips.

  1. Follow the installation instructions. Most martech tools can be deployed on your site via a tag manager like GTM. This is fine in most cases, but not for testing software. Make sure you have followed the vendor’s installation instructions to the letter.
  2. Keep code as clean and minimal as possible. This means no redundant or repetitive code. Archiving concluded experiments can make a difference if your platform allows it, and it keeps the code available so experiments can be run again or reused later.
  3. Limit the number of requests to servers. Too many requests contribute to flicker and slower loading times. Unlink extra images or information not currently needed on the page, and keep the code efficient, modular and ideally reusable.
  4. Hide the page while the variant is loading. Some tools offer this capability – like the Google Optimize anti-flicker snippet. On occasion, our developers have built extra progressive loading logic to achieve a similar effect. A sketch of this approach appears after this list.
  5. Consider using a task manager. This can help reduce the page load impact by controlling how the extra code is implemented on the page.
  6. Ask for more info about server locations. This is especially relevant for companies with customers all over the world. The server closest to the end user is selected automatically, so vendors with more servers can deliver faster response times – and fewer delays and less flickering. A company with thousands of servers all over the world is preferable to one with only 300.
  7. Check the performance analytics of the test variant. Your testing partner should be able to compare page performance under the control versus the variants. Ask about this and check whether there are significant changes in load time. Some tools also let you watch replays of the worst examples to determine whether the issue is flicker or something else.
  8. Choose the right tool for your unique context. There are many client-side testing tools on the market. If you have concerns about flicker or technical performance, make it part of your RFP process and ask vendors to respond to this specifically.
  9. Sharpen the test hypothesis. Don’t test for the sake of running tests. Base your ideas on analysis and research, prioritise your testing roadmap, and be clear in advance how the results will inform business decisions.
  10. Run a split URL test. If you are testing massive changes, for example a redesign of an entire flow with large images, the page will inevitably take longer to load. One option is to develop an MVP (Minimum Viable Product) of the parallel flow or alternative page in the back-end, and then use your testing tool to divert 50% of the traffic to this new content. In this scenario the heavy lifting is not done in the user’s browser, but on the server itself. A simplified sketch of this kind of split follows below.
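
To illustrate the split URL idea in tip 10, here is a simplified, hypothetical sketch of bucketing visitors 50/50 and redirecting half of them to a pre-built alternative page. The URL, cookie name and logic below are assumptions made for illustration; in practice your testing tool performs this bucketing and handles the tracking for you.

```typescript
// Simplified sketch of a split URL test: bucket visitors 50/50 and send half of them
// to a pre-built alternative page. The URL and cookie name below are placeholders;
// a real testing tool would also handle tracking, QA and exclusion rules.

const VARIANT_URL = "https://www.example.com/checkout-v2"; // the MVP flow built in the back-end
const COOKIE_NAME = "split_url_bucket";

function getBucket(): "control" | "variant" {
  // Reuse an existing assignment so returning visitors see a consistent experience.
  const existing = document.cookie
    .split("; ")
    .find((c) => c.startsWith(`${COOKIE_NAME}=`))
    ?.split("=")[1];
  if (existing === "control" || existing === "variant") {
    return existing;
  }

  // First visit: assign a bucket at random and remember it for 30 days.
  const bucket = Math.random() < 0.5 ? "control" : "variant";
  document.cookie = `${COOKIE_NAME}=${bucket}; path=/; max-age=${60 * 60 * 24 * 30}`;
  return bucket;
}

// Control stays on the current page; the variant is redirected before heavy assets load.
if (getBucket() === "variant" && window.location.href !== VARIANT_URL) {
  window.location.replace(VARIANT_URL);
}
```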
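And to illustrate tip 4, here is a hedged sketch of the “hide the page until the variant is ready” approach, loosely modelled on the idea behind anti-flicker snippets such as Google Optimize’s. The class name, event name and timeout are assumptions rather than any specific vendor’s API; the essential pattern is to hide early, reveal when the variant has been applied, and always reveal after a short fail-safe timeout.

```typescript
// Hedged sketch of a "hide the page while the variant loads" helper.
// Pair it with CSS such as:  .async-hide { opacity: 0 !important; }
// The class name, event name and timeout are assumptions, not a specific vendor's API.

const ANTI_FLICKER_CLASS = "async-hide";
const MAX_HIDE_MS = 2000; // fail-safe: never hide the page for longer than this

// Hide the page as early as possible; this code should run in the <head>, before first render.
document.documentElement.classList.add(ANTI_FLICKER_CLASS);

function showPage(): void {
  document.documentElement.classList.remove(ANTI_FLICKER_CLASS);
}

// Reveal the page once the testing tool signals that the variant has been applied
// ("experiment-applied" is a hypothetical event your own snippet would dispatch)...
window.addEventListener("experiment-applied", showPage, { once: true });

// ...or after the timeout, so a slow or failing snippet never leaves the page blank.
window.setTimeout(showPage, MAX_HIDE_MS);
```

The fail-safe timeout is the important design choice: if the snippet is slow or fails to load, the worst case is a briefly blank page rather than one that stays hidden.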

Isn’t server-side testing the ultimate solution?

Compared to server-side testing, client-side testing is low friction, low cost and fast. It’s technology agnostic so it works with any tech stack. It’s also possible for non-technical team members to build and execute simple experiments without the help of developers. 

The occasional risk of flicker, which can be addressed as described above, is a small price to pay for that. The benefits are likely to outweigh the cost.

“The only way to win is to learn faster than anyone else”, says Eric Ries, author of The Lean Startup. That is what client-side testing allows you to do.

With each test, you gain invaluable information about your business and your customers: insights that you would not be able to obtain in any other way. These will not only inform short-term changes at your company, but also point towards the future direction of your business, revealing customer behaviour, trends and preferences.

And the point is: you can do all of that faster with client-side testing.

By Stella Chatzianagnostou, Experimentation Engineer, AWA digital
