<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1375589045796407&amp;ev=PageView&amp;noscript=1">
How to stop technical glitches ruining your split tests

Split testing is a vital part of any web optimisation programme because it’s proof that you’re making improvements that will bring in higher revenues.

But split tests can also be the weak link in the chain. If there’s a technical fault with the split test, you won’t be able to bank the profits.

Most ecommerce professionals are savvy enough to avoid rookie errors like declaring an A/B test winner without adequate statistical confidence or wasting time on split testing an area of the site that would never have a big impact on conversion.

But less well known are the technical hazards on the development side. Watch out for these development pitfalls that can have a big negative impact on the integrity of your results.

1) Invalid HTML within your split test code

The tech speak: Modern web browsers are extremely forgiving when it comes to invalid HTML code. There could be inconsistent nesting of tags, missing characters, unexpected tags, etc. The problem is that when browsers encounter invalid HTML they need to make a guess about what you really meant. Each browser has its own set of rules on how to handle different scenarios and if your split test includes content “corrected” by the browser, then you’ll likely have issues.

While a page with invalid HTML may display correctly, its underlying incorrect structure can cause a lot of problems. This is especially troublesome if you pull in invalid HTML content externally via Ajax, because the browser may not self-adjust the HTML structure the way it would if that content were rendered on its own page.

What this means in layman’s terms: When invalid HTML creeps into your split test code, all sorts of bad things can happen: layouts may display strangely, forms may not submit, functionality can break, and so on. To make matters worse, the symptoms you see (or don’t see) vary according to which browser you’re using.

How to overcome it: Always test your split test variations in each of your most popular browsers and devices. If something doesn’t look or work correctly in certain browsers and you suspect invalid HTML, compare the elements in question using both view-source and the browser’s developer tools, and look for differences in the code.
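
To see this self-correction in action, here’s a minimal sketch you can run in a browser console (the markup is purely illustrative). It uses the standard DOMParser API to compare the HTML as written with the DOM tree the browser actually builds:

```typescript
// Invalid markup: a <div> is not allowed directly inside <table>, so the
// HTML parser "foster-parents" it, hoisting it out in front of the table.
const written = '<table><div>Promo banner</div><tr><td>Price</td></tr></table>';

const doc = new DOMParser().parseFromString(written, 'text/html');

// What the browser actually built: note the relocated <div> and the
// <tbody> the parser inserted on your behalf.
console.log(doc.body.innerHTML);
// "<div>Promo banner</div><table><tbody><tr><td>Price</td></tr></tbody></table>"
// (exact output can vary between browsers)
```

If your variation code targeted that banner with a selector written against the original markup, such as table > div, it would silently match nothing.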

2) Simultaneously running split tests

This practice is currently hotly debated by strategists and statisticians, who have long lists of tests they want to run but are aiming for purity of results to get the maximum learning from each test. Running experiments simultaneously requires special consideration to make sure the results of one don’t influence the results of another and skew the overall results.

From a technical point of view, however, running two tests at once can also cause challenges that need extra consideration.

The tech speak: Let’s say you developed a long-running split test that removed a section of your webpage, such as the sidebar. After some time goes by, you might put this test out of your mind while it collects data. Two months later, you develop a second split test that modifies content in the sidebar. For visitors who have been bucketed into the variation of Test 1, Test 2 will fail because there is no sidebar to modify; Test 1 has already removed it.

While this is a simple example, it illustrates how simultaneously running split tests can complicate debugging.

What this means in layman’s terms: Running tests simultaneously doesn’t just potentially impact your stats; it can also impact the way your experiments run.

How to overcome it: Logging the title and version of each running experiment in your split testing tool is a smart habit that can save a lot of headaches when you’re working out whether experiments conflict. This is especially applicable if there are many people developing split tests in your organisation who may not be fully aware of the site elements affected by another developer’s tests.
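
One lightweight defence, sketched below with hypothetical test and class names, is to have each variation check its own prerequisites and log loudly when they are missing, so that a clash between experiments shows up during QA rather than in your results:

```typescript
// Hypothetical variation code for a test that rewrites sidebar content.
// The selector, heading and copy are illustrative, not from any real tool.
function applySidebarVariation(): void {
  const sidebar = document.querySelector<HTMLElement>('.sidebar');
  if (!sidebar) {
    // The element this test depends on is missing, quite possibly because
    // a concurrently running test (e.g. one that removes the sidebar
    // entirely) reached this visitor first. Surface it for QA.
    console.warn('[sidebar-content-test] .sidebar not found; possible experiment conflict');
    return;
  }
  const heading = sidebar.querySelector<HTMLElement>('h3');
  if (heading) {
    heading.textContent = 'Popular right now';
  }
}

applySidebarVariation();
```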

3) CSS class / selector conflicts within your split test code

The tech speak: A CSS class conflict occurs when there are opposing rules applied to the same element. Although these conflicts are not unique to split tests, they can easily occur if you’re working on a fairly large site with many intricacies that you may not be aware of. There are a number of issues that could spring up from CSS conflicts, but here are two common ones:

A - Modifying existing elements

You may be developing a site-wide split test that modifies divs with a particular class name; let’s say it’s something like <div class="info">. While the majority of the .info divs across the site’s pages look the same, it’s possible that there are other .info divs that you did not intend to modify, or that have entirely different implementation requirements. This may not be obvious at all, as these elements could be buried on step 4 of a checkout flow.

What this means in layman’s terms: Changes made to a certain style as part of the split test may inadvertently impact similar elements elsewhere on your website that use the same style name.

How to overcome it: Be extra careful with site-wide experiments and spend some time browsing around the site; you’ll be surprised at what you can miss.
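
As an illustration of how easy this is to get wrong (the class names below are hypothetical), compare an unscoped selector with one limited to the pages you actually audited:

```typescript
// Risky: this matches every .info div on the entire site, including ones
// you never reviewed, such as an .info panel buried on checkout step 4.
document.querySelectorAll<HTMLElement>('.info').forEach((el) => {
  el.style.backgroundColor = '#f5f5f5';
});

// Safer: scope the change to the section of the site you actually
// examined while developing the test.
document.querySelectorAll<HTMLElement>('.product-detail .info').forEach((el) => {
  el.style.backgroundColor = '#f5f5f5';
});
```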

B - Adding new elements

You may be adding elements via your scripts under a 'new' class name that actually already exists on the website, with pre-existing styles attached.

What this means in layman’s terms: Introducing new style elements can unintentionally impact elements elsewhere on your website if they have not been given unique style names.

How to overcome it: To avoid this issue, we typically prefix any added classes with “AWA-” so that there are unique selectors for any new elements being added. This also helps when you need to target these new elements since you’ll know they’ll always be unique.
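
Here’s a minimal sketch of that convention (the element and copy are invented for illustration):

```typescript
// Give the injected element a prefixed, test-specific class so it neither
// collides with nor inherits styles from any existing site class.
const banner = document.createElement('div');
banner.className = 'AWA-free-delivery-banner'; // not 'banner', which may already be styled
banner.textContent = 'Free delivery on orders over £50';
document.body.prepend(banner);

// The prefix also makes the new element unambiguous to target later:
const injected = document.querySelector('.AWA-free-delivery-banner');
```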

4) Over-relying on emulation when QA-ing split tests

The tech speak: The development tools in modern browsers work wonders for web development diagnostics. However, it can be all too easy to over-rely on them. The built-in emulators are extremely beneficial to split testing since you can easily test your code on a wide range of device categories, including laptop, tablet and mobile. While this is extraordinarily convenient, it’s important to know that an emulated browser cannot replicate every bit of functionality found on the actual device.

What this means in layman’s terms: Modern browsers allow developers to test their code on different devices from their development PC. However, these do not always represent a like-for-like experience. Knowing that these differences exist is important, so that you don’t make assumptions about how well your tests will perform based only on the browser’s built-in emulators.
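
As a quick illustration, the probes below (all standard browser APIs) can return different answers in a desktop emulator than on the physical device it imitates; running them in both environments is a cheap way to spot gaps:

```typescript
// Environment probe: run in the device emulator and again on a real
// device, then compare the output line by line.
console.log('touch points:', navigator.maxTouchPoints);
console.log('coarse pointer:', window.matchMedia('(pointer: coarse)').matches);
console.log('hover support:', window.matchMedia('(hover: hover)').matches);
console.log('viewport:', `${window.innerWidth}x${window.innerHeight}`);
console.log('pixel ratio:', window.devicePixelRatio);
console.log('user agent:', navigator.userAgent);
```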

How to overcome it: There’s no bulletproof way to deal with this issue short of purchasing and testing on every possible device. A practical strategy is to go through layers of testing with your team, who will inevitably be using a wide range of devices between them. As a developer, you should test with at least one physical mobile device of your own and cover the rest with emulators, including the browser’s built-in emulator as well as a cross-browser testing service.

Another reason to test on a physical device is that it will remind you of the different experience your customers will have. For example, browsing a site on a mobile device requires finger taps, which are much less accurate than mouse clicks, especially on small devices. Emulators will not surface these potential usability issues.

5) Overlooking dynamic or contextual content when QA-ing split tests

The tech speak: A common “gotcha” is a test that fails due to the absence of HTML that changes based on whether or not a user is logged in, or on information the user entered earlier. This is especially true when you are running a test through a checkout flow. The forms on these multi-page flows need thorough testing because their inputs almost always have cascading effects on the pages that follow.

What this means in layman’s terms: Split tests on pages that differ according to visitor ‘state’ (i.e. logged in / not logged in or paying by card / PayPal) need special attention when testing the code to make sure every visitor type sees what they should see.

How to overcome it: Plan accordingly. Testing all the possible outcomes can be quite laborious, but it helps if you know what the relevant variables are: promo codes, saved addresses, atypical notifications, etc. Go through and categorise the root causes of these content changes, then test each category individually and in combination until you have enough information to create a “map” of the variables and their outcomes.
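
As a sketch of what state-aware variation code can look like (the selectors and the state check are hypothetical and will depend on your site), resolve the visitor’s state first and only run the branch whose markup actually exists:

```typescript
// Hypothetical state detection; the real signal might be a body class,
// a cookie or a data attribute on your site.
type VisitorState = 'guest' | 'loggedIn';

function getVisitorState(): VisitorState {
  return document.querySelector('.account-menu') ? 'loggedIn' : 'guest';
}

// Apply only the branch whose markup exists for this visitor's state.
function applyVariation(): void {
  if (getVisitorState() === 'loggedIn') {
    // .saved-address is only rendered for logged-in visitors.
    document.querySelector('.saved-address')?.classList.add('AWA-highlight');
  } else {
    document.querySelector('.guest-checkout-cta')?.classList.add('AWA-highlight');
  }
}

applyVariation();
```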

Summary

They say that forewarned is forearmed, and if you’ve read this far, you’ll now be aware of some of the development dangers of split testing. No matter how skilled you are as a developer, you will inevitably write bugs in your tests, but being able to identify the many possible hazards of your split tests makes them that much easier to debug.

Your co-workers and customers will pay attention to details throughout the experience that you, as the developer anticipating the final result or “destination” of the split test, are likely to overlook.

Whether you are developing split tests for your own company or for someone else, the most important thing is to go through layers of testing, starting with yourself and then with your colleagues. This is key to developing high-quality split tests with reliable results that will translate into ever-increasing sales, revenue and leads on your website.

 

If you think you need help from the experts, read our ebook for the 8 questions you must ask to find, hire and get great results from CRO professionals.
