The AWA Periodic Table of Conversion Rate Optimisation Success Factors
Conversion Rate Optimisation – or CRO – may seem like a dark art if you’ve never experienced it first-hand.
Conversion Rate Optimisation (CRO) is the process of making iterative improvements based on user research and analytics, to increase website conversion rate and online revenue. That sounds simple in theory but putting it into practice can be complex, as there are many different factors involved.
Ecommerce companies that embrace CRO claim to get stellar results and ever-increasing online sales.
If you’d like to replicate their achievements, then the first step is to understand the CRO process as a whole. To help with that, we’ve created this infographic which gives an overview of all the key elements involved in successful CRO, designed like the well-known periodic table for chemical elements.
Each factor has been given a weighting to indicate its relative importance in the CRO process. We’ve also noted all CRO tactics, including accelerators that can improve your chances of success and negatives that can hinder it.
Each sub-group has a number of tiles which represent individual elements within that part of the process. Each tile has a title, with a two-letter abbreviation for easy identification, similar to the traditional periodic table you may remember from school. At the outer edge of the table you’ll find a short description of each element.
In addition, each tile has a weighting in the top right hand corner. Successful CRO comes from a combination of different factors, some more important than others. This weighting is our attempt to indicate which factors are most critical; the higher the number, the more important it is, based on our experience.
CRO is a process. Getting great results is not just about what you do; it’s also about how you do it. Attitudes and culture can improve your CRO success rate or hinder it. Being aware of conversion rate optimisation best practices, such as accelerators and inhibitors, allows you to alter your approach to wring the maximum benefit from your CRO programme.
Research and analysis is the starting point for any successful CRO programme, as it steers the direction of your efforts and lets you know exactly what you should be testing and in what order.
You should approach this part of the process like a blank canvas with an IDK attitude (I Don’t Know). In other words, come to it with an open mind that’s willing to investigate in a systematic and objective way.
On our Periodic Table of CRO Success Factors, the major group of Research and Analysis is divided into three sub-groups:
The different types of research techniques offer different kinds of insight and you should aim for a balanced approach. Quantitative data can tell you what is happening, and qualitative can tell you why.
Here we have listed some of the main methodologies and techniques for getting research, analysis and the wider CRO process kicked off. Some of these require CRO tools to be installed on your site.
Analytics has been placed in the sub-group of quantitative data because we would expect it to be already installed (although whether it is configured to get the most from the reports is another matter).
Survey tools are fundamental to a robust CRO process.
These surveys can take many forms, including email surveys and on-page pop ups, from a single question to detailed questionnaires filled out by the visitor when they leave the site. Surveys also include tools to recruit people willing to take part in real-time usability studies.
Mapping tools enable you to track visitor behaviour and activity, which is why they’re such a crucial addition to your CRO toolkit.
They provide visual maps of behaviour based on actual data from clicks, or predictive data based on algorithms and artificial intelligence. By examining the data and patterns generated you are able to use heatmaps to increase your conversion rate. Examples include click maps, scroll maps, confetti maps, mouse movement maps, overlay maps, list views, gaze plot maps, attention maps and more.
A/B testing is essentially a method used to test how visitors respond to your website, and the various elements of it.
Having a split test platform is a vital part of the Execution phase of a CRO process, when new web pages or experiences are tested against the existing ones. However, it’s best to get it installed as early in the process as possible. One reason is that split testing can be used as part of the research phase to answer specific questions. A number of platforms are now available, such as VWO and Optimizely. For more information, there are lots of split testing articles in our blog.
The systematic review of a website using an evaluative framework is one of the least understood of all research techniques. The danger is that it can encourage people to simply offer their opinions and express personal likes and dislikes about the site – which is little more than guessing.
To get the most out of your heuristic review, always use a framework to ensure all bases are covered, opinions are recognised as such and not declared as facts, and insights are kept as objective as possible.
It is astonishing how often there is a wealth of data in the company which is simply never looked at. From live chat transcripts and complaint forms to full-blown focus group debriefs, the chances are there is a gold mine of information sitting around waiting to see the light of day. Take the time to gather it all together as part of the setup process.
Setting up these tools normally takes just a week or two, and then the site is ready for in-depth CRO research and analysis to begin.
Quantitative research (hard numbers) often gives you the factual evidence that forms the backbone of a successful CRO programme. It is especially useful to prioritise and contextualise potential opportunities.
Quantitative research can help to establish whether the issue is something experienced by many visitors, and therefore potentially lucrative, or whether it’s a bugbear of one particular tester that other people generally aren’t bothered about.
On our periodic table, we have focused on four of the major types of quantitative research, which are used on almost every CRO project.
Surveys are a tool used in both qualitative and quantitative research because, even though responses are qualitative in nature, it’s possible to count them and work with them numerically. This is true even for open-text responses. For example, if you ask abandoning users why they didn’t complete their purchase, you can categorise their responses into a range of themes and then rank the themes by occurrence rate.
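Turning open-text responses into ranked themes can be sketched in a few lines of Python. The responses and keyword themes below are invented for illustration; in practice the themes emerge from reading the actual answers:

```python
from collections import Counter

# Hypothetical free-text answers to "Why didn't you complete your purchase?"
responses = [
    "shipping cost was too high",
    "couldn't find a returns policy",
    "delivery was too expensive",
    "checkout kept crashing on my phone",
    "postage costs more than the item",
    "no guest checkout option",
]

# Hand-built keyword themes (illustrative, built after reading responses)
themes = {
    "delivery cost": ["shipping", "delivery", "postage"],
    "checkout friction": ["checkout", "crash", "guest"],
    "trust/returns": ["returns", "refund", "policy"],
}

def categorise(text):
    """Return every theme whose keywords appear in the response."""
    return [theme for theme, words in themes.items()
            if any(w in text.lower() for w in words)]

# Count how often each theme occurs, then rank by occurrence rate
counts = Counter(theme for r in responses for theme in categorise(r))
for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```

Ranking the themes this way turns qualitative feedback into a quantitative priority list you can act on.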
Google Analytics (GA) and similar packages, which track onsite behaviour, can be a goldmine of insights. Use it to plot the journey through your site, see where visitors are leaking out of the funnel, identify under-performing pages and much more. To get the most out of GA, we recommend that you segment all your reports. This could be as simple as segmentation by device category or visitor type, but think about what’s relevant in your business.
As we alluded to earlier, it is vital to ensure that your Google Analytics data is not “dirty”. Poor analytics configuration often results in inaccurate, incomplete or “dirty” data. Making decisions based on unreliable data could be hurting your business and costing you money.
Heat and click maps show exactly where visitors click on the site, regardless of whether it’s a hyperlink or not. There are so many on the market currently that we can confidently say that we have managed to test more of these maps than we can remember.
If you are not using heat, click and scroll maps to understand the behaviour of your visitors, or would like to see a comparison, be sure to read our blog on the 5 heatmap tools to increase conversion rates, or go straight to Crazy Egg vs Hotjar for a head-to-head of two of the most popular heatmap tools available.
Are people clicking where you expected? Are they clicking on areas that you didn’t expect? How does the click pattern of your new page compare to a previous variation? What inferences can you draw from the difference? Free EyeTracking reports allow you to see where visitors are clicking on your website.
Qualitative research (what people say) is vital to help understand why customers and visitors behave in a particular way. It is ideal for discovery and exploration, and a great way of generating insights.
In the periodic table, we have included five main types of qualitative research. Although other methods can be used, these alone bring enormous value to any CRO programme.
Usability testing is a powerful qualitative technique, if done well. The term covers a huge range of methods from quick panel-based videos to high tech laboratories with testers wired up to monitors and observed through one-way mirrors.
We advocate speaking to visitors and customers using a screen-sharing system when they are actually on the website, or have just placed an order. (There are specialised tools that enable you to recruit testers in real time). One advantage is that everything is fresh in their mind – the reasons why they were on the site, the thoughts they had, and any difficulties or frustrations they had to overcome to make their purchase.
To get valuable data from this type of research it is important to truly listen to the tester, and not guide, lead or influence their behaviour.
Another valuable technique is to become a customer yourself. Buy the products, visit the stores, make a return, and observe and record every step. Do this on different device types.
Surveys can be a valuable source of data, yielding excellent insights very quickly. The skill lies in asking the right questions, and, of course, taking the time to read through the daunting piles of responses.
Live chat transcripts can give you valuable nuggets of insight, especially where the same questions come up time and again. It’s always worth talking to the operators of both live chat and the call centre. They are at the sharp end, with the best knowledge about web visitors and customers of anyone in the company. They rarely get asked, but when they do, they are more than happy to share their information.
A value proposition is the primary reason why someone should buy from you rather than a competitor, and it’s amazing how many websites don’t communicate this clearly. Testing the value proposition is considered by some experts to be the single most important piece of conversion optimization advice. It is included here in research because it is something that should be discovered, not invented.
Many websites do not have a value proposition, or if they do it is inadequately expressed. Counter-intuitively, you are probably the worst person to express it, because you are too close. What you believe you are selling may not be what the customer thinks they are buying. Research with your customers into the value proposition should focus on how the website solves a problem or improves a customer’s situation, the specific benefits and why they buy from this website rather than a competitor.
Research and Analysis forms the bedrock of a successful CRO strategy. Execution is where those insights and data are used to create tangible improvements to the website.
On our Periodic Table of CRO Success, we have shown four sub-groups in the Execution phase of a CRO process, with factors to consider in each:
As Bill Gates says, ‘Prioritisation is effectiveness’. The difference between a well-prioritised optimisation plan, or roadmap, and a poor one can be the difference between success and failure. The CRO Research and Analysis usually yields dozens of areas which could improve the revenue and customer experience of a website. How do you decide which to go for first?
Which ones can you safely ignore, leaving you and your team free to focus your efforts where they’ll make a difference? A number of methods of CRO prioritisation are used by different practitioners, such as Bryan Eisenberg’s Time-Impact-Resources model, but in general they all work by assigning a weighting to each issue. The factors used in calculating the weighting include the elements shown here:
We use the Evidence, Potential, Ease (EPE) framework developed by our optimisers, with each idea given a score of 1-5 for each of the criteria. Use the table below as a guide.
| Score | Potential | Evidence | Ease |
|-------|-----------|----------|------|
| ★★★★★ | Very high potential, such as critical usability issues with a high occurrence rate, in an area where high drop-off is observed. Situated in an area of the site with high traffic and a relatively high contribution to total revenue. There can only be a few of these. | Strong evidence from multiple sources points to it being an issue or opportunity. | The test is easy to code and implement. It enjoys the full support of everyone in the organisation. |
| ★★★★ | High potential, such as a critical usability issue or one characterised by a high drop-off rate. May occur in an area of the site with slightly less traffic or where contribution to revenue is slightly lower. | At least one strong source of evidence. | The test is easy to code. It enjoys the support of most people in the organisation. |
| ★★★ | Medium potential, such as a usability issue ranked Medium, or one with a lower occurrence rate. Could also be a high-potential idea in an area of the site with less traffic or a smaller contribution to revenue. | At least one source of evidence. | The test is relatively easy to code. It enjoys wide support in the organisation. |
| ★★ | Medium to low potential idea in an area of the site with low traffic or low contribution to revenue. Could be a usability issue marked Low. | May have only one source of evidence, or weak evidence. | Relatively complex to code, or the idea does not have wide support in the organisation. |
| ★ | Low potential, occurring in areas with low traffic. | Weak evidence, or no objective evidence. | Ideas that are difficult to implement should fall into this category, even if they have higher potential. |
Source Table from E-commerce Website Optimization by Dan Croxen-John and Johann Van Tonder.
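As a rough illustration of how a scoring framework like this can drive a roadmap, here is a minimal Python sketch. The idea names, the 1-5 scores and the choice to multiply the three criteria together are our illustrative assumptions, not the published EPE method:

```python
# Illustrative EPE-style prioritisation: each idea gets a 1-5 score for
# Evidence, Potential and Ease. Multiplying the scores (an assumption,
# not AWA's actual formula) favours ideas that do reasonably well on all
# three axes over ideas that excel on only one.
ideas = [
    {"name": "Simplify checkout form", "evidence": 5, "potential": 4, "ease": 3},
    {"name": "New homepage banner",    "evidence": 1, "potential": 2, "ease": 5},
    {"name": "Clarify delivery costs", "evidence": 4, "potential": 4, "ease": 4},
]

for idea in ideas:
    idea["score"] = idea["evidence"] * idea["potential"] * idea["ease"]

# Highest score goes to the top of the testing roadmap
roadmap = sorted(ideas, key=lambda i: i["score"], reverse=True)
for i in roadmap:
    print(f'{i["score"]:>3}  {i["name"]}')
```

Note how the banner idea, despite being the easiest to build, lands at the bottom: weak evidence drags the whole score down, which is exactly the discipline a prioritisation framework is meant to enforce.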
Hypothesis development is born out of the research findings. The time, skill and effort that go into developing hypothesis statements have a crucial impact on the overall success of a CRO programme.
The statement should recap the data observed and feedback received, and formally describe the suggested change and the reasons why it is believed that will lead to improvement. The change is represented in a wireframe, and subjected to peer review where possible to ensure it delivers on the statement.
Good hypotheses are key to ongoing success, and formally developed hypotheses ensure you are focused on the customer and their needs, which in turn means you are more likely to test something meaningful that gets results.
A hypothesis is a prediction that making specific changes to your website will result in increased sales. Hypotheses should be based on evidence about customer behaviour and feedback.
“Does it state the intervention, the anticipated behaviour and the target group?”
A wireframe is the bare bones of a web page, displayed as a mock-up of the proposed layout of the new page you want to test. A wireframe is usually a simple line drawing with copy and placeholders for images. If you include too much ‘design’ here, when it is reviewed it will be hard to focus on the core functionality, so simple is better in this instance.
“Does it address the issues identified by the data, so that it can be split tested?”
Reviewing the wireframe and copy is necessary because, as with all creative tasks, there may well be more than one viable creative solution. That is why it can be useful to get feedback before launching the test. When reviewing it, you need to ensure it delivers on the hypothesis.
“Has the wireframe had feedback from the wider team?”
Clearly, the new variation needs to follow your brand guidelines, to reduce the risk of other variables affecting the split test result. In many cases this is something your developer can do easily with basic HTML and CSS. However, sometimes a designer is needed to create new icons or images, and to make sure the spacing, layout, hierarchy and typography looks professional, with copy and content putting the message across in a clear and compelling way.
When test results come in, how confident can you really be about those results? For example, is your sample size large enough to make inferences about the broader population? This is not a matter for guesswork; it should be calculated before the test starts. Run any test over at least two commercial cycles – for most businesses, this will be two full weeks. Don’t pause it earlier, and don’t extend its run just because you didn’t get the desired result.
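Calculating the required sample size before the test starts can be done with a standard two-proportion power calculation. The sketch below is a textbook approximation, not any particular platform’s method; the 3% base conversion rate and 10% relative uplift are invented for illustration:

```python
from statistics import NormalDist

def sample_size_per_variation(base_rate, uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a relative
    uplift in conversion rate with a two-sided two-proportion z-test."""
    p1 = base_rate
    p2 = base_rate * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. 3% base conversion rate, hoping to detect a 10% relative uplift
n = sample_size_per_variation(0.03, 0.10)
print(f"{n} visitors needed per variation")
```

A figure like this, divided by your weekly traffic, also tells you whether the two-commercial-cycle minimum run time is actually long enough for your site.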
When the test is declared, resist the temptation to cheer for the ‘winners’ or bury bad news for the ‘losers’. Both are learning opportunities, and the negative uplifts can often teach you much more, and indirectly lead to the real breakthroughs.
You won’t always get a 34.7% uplift from a single test, as we did in this Northern Parrots case study.
Be methodical about analysing each test result and logging the learnings to inform the next test and future CRO activity. Run iterations of both positive and negative tests to get the most value from them before moving on to the next issue, but do this in the context of your overall roadmap.
MVTs should be used with caution. Statistically, at 95% confidence, roughly 1 in 20 comparisons of a variation with no real effect will still show up as a winner. MVTs create a new variation for each combination of the variables being tested, multiplying the potential for misleading results. As a precaution, the confidence level can be increased to 99%, but this still leaves test results open to false positives. Provided you have enough traffic, MVTs are useful as a learning exercise to identify conversion levers. We recommend always running an A/B test to follow up on MVT results, to help minimise the risk.
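The multiplying risk is easy to quantify. Assuming the comparisons are independent, the chance of at least one false positive across k variations is 1 - c^k, where c is the confidence level:

```python
# Chance of at least one false positive across several variations,
# assuming independent comparisons at a given confidence level.
def false_positive_risk(variations, confidence=0.95):
    return 1 - confidence ** variations

# An MVT with 3 variables of 2 options each yields 2**3 = 8 combinations,
# i.e. 7 variations compared against the control.
print(f"{false_positive_risk(7):.0%} risk at 95% confidence")
print(f"{false_positive_risk(7, 0.99):.0%} risk at 99% confidence")
```

Even at 99% confidence the risk across seven variations is still several percent, which is why a follow-up A/B test on the apparent winner is worthwhile.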
For more on split tests, see A/B Tests in our blog.
The success of a conversion rate optimisation programme can often be improved by HOW you do it, rather than WHAT you do. In particular, there are some common traps that inexperienced optimisers fall into.
We have identified six of the most common negatives that can put the dampeners on any CRO programme:
Time and again we have been told by some harassed middle manager, ‘I know the data says that, but the board just want us to do X’. Or even, ‘The CEO was at a dinner party and the hostess said she thought the website looked old-fashioned, so we have to make it look more modern”.
On hearing something like this, there is often an automatic assumption that the suggestion has to be top priority or ignored. A better course of action lies in between. We recommend putting it into the system and subjecting it to prioritisation along with other test ideas.
Ideas that don’t come out of research and data rarely work as well as ones that do, and while your resources are tied up, you’re missing opportunities elsewhere. But sometimes these random ideas do very well, so allowing them to be evaluated systematically increases the chances of getting a winner.
Peak periods of trading are a great time to be testing as you have so much more traffic. However, it’s easy to get hoodwinked by a great test result if you ignore seasonal trading patterns that could influence it. This also works in reverse, if you dismiss a negative result that at any other time of year would have shown an uplift. Make sure you recognise the season you’re in when you’re reading the results, to get true value from them.
Neuroscience is starting to reveal just how hard it is for human beings to make decisions based on evidence alone. We have a natural tendency to interpret the data to fit our own prejudices. Try to stand back, be objective and be open to other possibilities of truth. One of the hardest things to do is run a test without being emotionally invested in the outcome. You want your variation to win! However, this increases the risk of interpretation bias. If you take a different approach, eager to learn from the outcome of an experiment, you always win whether the result is positive or negative.
Small changes definitely have their place in CRO. For example, simple copy changes often have huge impact. If you’ve had a win, then planned, conscious iterations of tiny elements can lead to even bigger revenue uplifts. However that’s very different from little tests without any basis in data, such as altering the font, or changing the colour of a button.
Gather a few people in a room and you’ll quickly generate a long list of ideas to test. How about new banners or different promos? It may seem logical to A/B test these. The problem is that you have a finite number of testing slots and should be very disciplined about how you occupy those. If your customers are put off buying because of an issue around the way the product page information is presented, or a hard-to-follow checkout, tempting them with a better banner is going to have little effect.
The internet is awash with lists of rules about what makes a good website. Of course, much of this is sensible stuff, especially when you’re designing a website from scratch. But the danger is that it lulls you into thinking that’s all there is to it. Every website is different, so what’s ‘best practice’ for one won’t necessarily uplift sales on another. Success comes from finding out what’s right for your website, your business model and your customers, and making it happen.
If you want to turbo-charge your CRO success rate, you need to be open to changing your attitudes and doing things in a different way.
We’ve identified four areas where adopting a CRO mindset that permeates your entire CRO activity will reap rewards in the long term:
Be like a child, full of curiosity and wonder. Be open to discovery, and constantly interested in what’s going on under the surface. Wander into every nook and cranny of your website and try to look at what’s there with a fresh pair of eyes, like a customer seeing your site for the first time.
When this attitude colours every stage of the process, there are no guesses, assumptions or prejudices. Opportunities are no longer missed or held back by a limiting belief that there’s no room for improvement.
An exclusion test is when you remove just one element from a webpage with a split-test. Many pages are cluttered up with icons, elements, images and text, all competing with each other. Many of these elements would, on the face of things, improve sales. Yet we have run tests where deleting one element such as ‘free delivery’ has seen revenue go up. As always, this presents an opportunity to learn something that you can apply more broadly. What does the outcome of an exclusion test suggest about your visitors and the buying journey?
When you find something that is bugging your customers, don’t be afraid to fix it properly. Bold tests are not just about creating a substantially different web experience; you may need to fundamentally alter an aspect of your service delivery, such as giving a stronger guarantee. Fortune favours the brave; big bold tests, informed by the evidence, have the potential to bring outstanding returns.
How many split tests can you physically run in a year? That will be dictated by factors such as your traffic levels, the complexity of the tests you want to run, and the size and capability of your technical and optimisation teams. Conduct a Testing Capacity Audit (or get in touch to ask us to do it for you) to find out the maximum number of tests you can run, and aim to fill every slot. This is a no-brainer. The more tests you run, the more you learn, and the more likely you are to see your revenue per visitor increasing month by month.
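A back-of-the-envelope version of such an audit is simple arithmetic. The figures below (run time, parallel slots, build time) are invented for illustration, not benchmarks:

```python
# Rough testing-capacity estimate: how many tests fit in a year if each
# needs a minimum run time and only a limited number can run in parallel?
weeks_per_test = 2   # minimum run: two full commercial cycles
parallel_slots = 2   # independent areas of the site testable at once
build_weeks = 1      # coding/QA time that occupies a slot between tests

tests_per_slot = 52 // (weeks_per_test + build_weeks)
capacity = tests_per_slot * parallel_slots
print(f"~{capacity} tests per year")
```

Playing with the inputs shows where the leverage is: halving build time often adds more tests per year than adding another slot.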