Discover the real conversion levers on your web pages with exclusion split tests

Does the product recommendations engine on your product details page help or hinder conversion? How important are reviews on your product listing pages? What impact do the security badges in your checkout have?

Ecommerce web pages are made up of dozens of individual design elements which, to a greater or lesser extent, play a role in persuading visitors to buy from your website. Discovering the size of the role they play can provide you with powerful insight about how you can improve your site for more sales.

But how can you discover how much value an individual design element has on a page? The answer is by running exclusion split tests.

What are exclusion split tests?

Exclusion split tests involve creating a new variation of a key page with one element hidden. From this, you can discover how important that element, or the message within it, is to your visitors in that position, or on that particular page.

In many conversion optimisation systems, split testing comes after a period of research, prioritisation and planning. Exclusion split tests are different in that they are used as a research tool, run during the insight gathering phase to inform future split tests and insights.
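
In practice, the variation itself is usually just a small script that your testing tool serves to visitors in the test arm, hiding the chosen element. The sketch below is a minimal, hypothetical TypeScript example: the selector and the DOM-ready handling are assumptions, and most testing platforms provide their own equivalents.

```typescript
// Minimal sketch of an exclusion variation script, of the kind a testing
// tool (Optimizely, VWO, Convert, etc.) would serve only to visitors in
// the variation. The selector below is a hypothetical example: replace it
// with whatever uniquely identifies the element on your own page.
const EXCLUDED_SELECTOR = '.product-recommendations';

function hideElement(selector: string): void {
  // Hide with CSS rather than removing the node, so the surrounding layout
  // and any scripts that reference the element keep working.
  document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
    el.style.display = 'none';
  });
}

// Run once the DOM is ready; most testing tools handle this timing for you.
if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', () => hideElement(EXCLUDED_SELECTOR));
} else {
  hideElement(EXCLUDED_SELECTOR);
}
```

Hiding the element, rather than deleting it from the page template, keeps the control and variation identical in every other respect, which is what makes the comparison meaningful.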

What you can learn from exclusion split tests

Exclusion split tests make great research tools because you’re not relying on what people tell you (which can be flawed) but instead observing behaviour in a real scenario. Running these tests gives you valuable insight into the various conversion levers on your site and can help you better understand the customer decision-making process, which is a crucial starting point in ecommerce optimisation.

For example, if you remove a ‘free delivery’ message on the product details page, you’re not merely assessing the placement or wording of that message, you’re also discovering how the notion of ‘free delivery’ influences behaviour at that point of the visitor journey.

With messages that appear throughout the visitor journey, this insight can be expanded further. Running exclusion split tests on the same message higher up the funnel (e.g. on the home and category pages) or further down the funnel (e.g. within the basket and checkout pages) may lead to different results. Collectively, these learnings give you valuable insight into the customer decision-making process throughout their journey on your site.
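
If your testing platform supports URL targeting, one way to structure this is to register the same exclusion as a separate experiment per funnel stage, so each stage produces its own result. A hypothetical sketch, with illustrative URL patterns and selector:

```typescript
// Hypothetical sketch: the same exclusion registered as independent
// experiments at different funnel stages. URL patterns and the selector
// are illustrative; in practice your testing tool's targeting rules do this.
interface ExclusionExperiment {
  name: string;
  urlPattern: RegExp; // which pages the experiment runs on
  selector: string;   // the element to hide in the variation
}

const freeDeliveryExclusions: ExclusionExperiment[] = [
  { name: 'free-delivery-category', urlPattern: /\/category\//, selector: '.free-delivery-banner' },
  { name: 'free-delivery-product',  urlPattern: /\/product\//,  selector: '.free-delivery-banner' },
  { name: 'free-delivery-checkout', urlPattern: /\/checkout/,   selector: '.free-delivery-banner' },
];

// Each experiment is evaluated independently, so the same message can
// win at one stage of the funnel and lose at another.
function experimentsForUrl(url: string): ExclusionExperiment[] {
  return freeDeliveryExclusions.filter((exp) => exp.urlPattern.test(url));
}

console.log(experimentsForUrl('https://example.com/product/blue-widget').map((e) => e.name));
```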

What to look out for when running exclusion split tests

Like all split tests, when you run an exclusion split test, there are three possible outcomes – the variation either performs better than the original, worse than the original, or the same as the original.



Removing the element increases RPV and/or other KPIs

You’ve learnt that this element, in its current form or position, is not helping the user journey and may even be hindering it.

Do you really need that element? Does it fulfil a business need – in which case can you develop a hypothesis around testing alternative executions or positions – or can it be removed?

Consider the fundamental implications beyond this individual element. What does it mean that the specific message deters visitors from buying? What insight can you draw from it about your visitors’ needs, wants and motivations? Formulate a hypothesis around this and create future tests to discover more.

Removing the element decreases RPV and/or other KPIs

You’ve learnt that this element is important to the user journey and should remain in any future designs you test on this page.

What insight can you draw from this? Formulate a hypothesis about the other implications of this. Can you amplify this message, or principle, either on this page or elsewhere in the funnel? Might there be additional opportunity by testing alternative messages or design executions of this element?

Removing the element has no impact on RPV and/or other KPIs

This suggests that this element in its current form neither helps nor hinders user behaviour on this page.

Do you really need that element? Could you hypothesise that downgrading its prominence, to give other elements greater visibility in the eye path, would improve performance?

Also think about the wider impact. What was your original hypothesis about that element, and how does this finding prompt you to adjust that hypothesis? For example, what does it mean that a given message doesn’t influence behaviour in the way you expected? What insight can you draw from it? Might running an exclusion test of the same message on pages elsewhere in the funnel help you learn more?
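
Whichever outcome you see, make sure the number you’re reading is computed consistently. As a sanity check alongside your testing tool’s reporting, RPV per arm is simply total revenue divided by the number of visitors in that arm. A minimal sketch, with made-up field names and data:

```typescript
// Minimal sketch of computing RPV (revenue per visitor) per test arm from
// visit-level data. Field names and the example records are made up, and
// you should still rely on a proper significance test (or your testing
// tool's stats engine) before acting on any difference.
interface Visit {
  variation: 'control' | 'exclusion';
  revenue: number; // 0 for visits that didn't purchase
}

function rpv(visits: Visit[], variation: Visit['variation']): number {
  const arm = visits.filter((v) => v.variation === variation);
  if (arm.length === 0) return 0;
  const totalRevenue = arm.reduce((sum, v) => sum + v.revenue, 0);
  return totalRevenue / arm.length;
}

// Illustrative data only.
const visits: Visit[] = [
  { variation: 'control', revenue: 0 },
  { variation: 'control', revenue: 45 },
  { variation: 'exclusion', revenue: 0 },
  { variation: 'exclusion', revenue: 60 },
];

const controlRpv = rpv(visits, 'control');
const exclusionRpv = rpv(visits, 'exclusion');
console.log(`Control RPV: ${controlRpv.toFixed(2)}`);
console.log(`Exclusion RPV: ${exclusionRpv.toFixed(2)}`);
console.log(`Relative lift: ${(((exclusionRpv - controlRpv) / controlRpv) * 100).toFixed(1)}%`);
```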

What have we learnt from our exclusion split tests?

Exclusion split tests allow you to examine the value of the underlying message, and the psychology motivating visitors to your site, in a way that insight-generation techniques relying on conscious thought, such as surveys and usability studies, are unable to do.

The value of the insight these tests can generate is why we’ve started to incorporate exclusion split tests into the insight-generation stage of our Conversion System. We have used them to discover the contribution to key KPIs of elements such as value proposition messaging, product recommendation engines and ecommerce functionality believed to be a key driver of purchase.

In one experiment, we tested the importance of a money-back guarantee message on a gift site and found that removing it increased RPV. Investigating further, we discovered that visitors didn’t want to contemplate the possibility of having to return something they had bought as a gift. So this message, rather than reassuring buyers, was actually putting some visitors off purchasing. It’s the kind of insight that would have been very difficult to access through other forms of research.

In another test on an ecommerce website, we found that removing the left-hand navigation and filtering options from a product listings page, and consequently making the images larger, increased RPV. This contradicted what users had told us: that they found filtering useful. The insight helped shape future tests, as we now had evidence that the way the site’s visitors searched was less about logically hunting for something specific (as some survey data suggested) and more about browsing for inspiration using fast, instinctive and emotional thinking.

What have you learnt from exclusion split tests?

What have you learnt about your website visitors from running exclusion split tests? Did the results confirm your hypotheses or surprise you? We’d love to hear about your experience. Share it with us in the comments section below.

Read our ebook for 8 questions you must ask to find, hire and get great results from CRO professionals.
