How to create a prioritisation system for your CRO programme
Following a prioritisation system for your CRO programme may feel like you’re wasting valuable time that could be spent testing, but the example below will make you think twice about running a CRO programme without methodically scoring and ranking test ideas.
Amazon’s recommendation engine is one of the most important inventions in ecommerce. A junior software developer at Amazon came up with the idea while standing in the checkout line at his local supermarket. When he first presented it to his colleagues, it was rejected by a powerful executive who said it would distract customers. Despite being told to park the idea, the developer pushed ahead and got a prototype ready for testing. The executive was upset, but had to allow the test to run. It won by such a margin that Amazon implemented it with great urgency.
One of the biggest inventions in ecommerce almost didn’t see the light of day.
This is what happens if you don’t have an objective system by which to evaluate and rank ideas, using a pre-agreed set of criteria.
There are many different prioritisation frameworks, and a range of criteria you can use to prioritise hypotheses, such as:
- scoring potential impact
- ease of implementation
- strength of evidence
- alignment with business objectives
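To make the criteria above concrete, here is a minimal sketch of how you might turn them into a single priority score per hypothesis. The weights, the 1–10 scale, and the example hypotheses are illustrative assumptions, not a prescribed framework:

```python
# Illustrative weights for each criterion (assumed - tune to your business).
WEIGHTS = {
    "impact": 0.4,     # scoring potential impact
    "ease": 0.2,       # ease of implementation
    "evidence": 0.2,   # strength of evidence
    "alignment": 0.2,  # alignment with business objectives
}

def priority_score(scores: dict) -> float:
    """Weighted sum of 1-10 criterion scores for one hypothesis."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Hypothetical test ideas with assumed 1-10 scores per criterion.
hypotheses = {
    "Move upsells above the fold": {"impact": 8, "ease": 6, "evidence": 7, "alignment": 9},
    "Redesign checkout flow":      {"impact": 9, "ease": 2, "evidence": 5, "alignment": 8},
}

# Rank hypotheses, highest priority first.
ranked = sorted(hypotheses, key=lambda h: priority_score(hypotheses[h]), reverse=True)
```

The point is not the particular weights but that every idea is scored against the same pre-agreed criteria before anything is built.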
However, what matters is not exactly how you prioritise, but that you have a system in place to manage these competing demands.
Each test you run represents an opportunity cost. You don’t want to waste time and resources on tests with little or no potential, on tests that will take ages to build or to reach a conclusive result, or on ideas that require heavy development effort when lower-effort ideas offer similar potential. In short, a prioritisation system saves you time and money.
How do you create a prioritisation system?
The simplest form is an effort-potential matrix. Determining effort is reasonably straightforward. You could ask your dev team to give you an estimate of how long it would take to code, or judge for yourself based on the level of changes required.
To determine potential, it helps to speak to users directly, either in the form of surveys or semi-structured interviews. Usability testing is another goldmine for clues about potential. You’ll quickly form a view about where the biggest potential is from a customer perspective. You could also look at the level of traffic exposure on that area of the site or page. For example, when working with a vehicle rental company, we noticed that less than half of their visitors scrolled down far enough on a key page to notice the high-margin upsells. Conversion potential was effectively halved, and this became a focus area for us.
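Once each idea has an effort estimate and a potential score, the effort-potential matrix itself is just a 2x2 classification. The sketch below assumes 1–10 scores and a midpoint threshold of 5; the quadrant labels and example ideas are illustrative, not part of any formal method:

```python
def quadrant(effort: int, potential: int, threshold: int = 5) -> str:
    """Place an idea in one of four effort-potential quadrants."""
    if potential > threshold:
        return "quick win" if effort <= threshold else "big bet"
    return "fill-in" if effort <= threshold else "avoid"

# Hypothetical ideas with assumed (effort, potential) scores on a 1-10 scale.
ideas = {
    "Show upsells higher on page":  (3, 8),
    "Rebuild product configurator": (9, 9),
    "Tweak button copy":            (2, 3),
    "Migrate CMS":                  (9, 2),
}

# Classify every idea into its quadrant.
matrix = {name: quadrant(effort, potential) for name, (effort, potential) in ideas.items()}
```

Low-effort, high-potential “quick wins” go to the top of the queue; high-effort, low-potential ideas are parked.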
At AWA we also consider how likely a test is to make a measurable difference to revenue, as opposed to a metric of lesser importance such as progression to the next page or bounce rate.
Interested in reading more?
This is mistake number six in a series of seven posts, outlining the mistakes that almost everyone makes in conversion optimisation. Keep an eye out for the remaining mistake over the next few weeks.
Mistake #3: Are you assuming your GA data is accurate?
Mistake #5: Does your CRO programme lack momentum?
Read our ebook below to learn more about how the three power metrics can make your conversion programme a success alongside your prioritisation system.