Interview with Head of Experimentation: Stewart Ehoff at RS Components
In the first of our series of interviews with leaders in Experimentation, we’re joined by Stewart Ehoff, Experimentation Strategy and Technology Lead at RS Components.
Watch the video to view the full 46-minute interview, or read the transcript below.
Stewart Ehoff bio:
As a naturally curious individual, I’ve found my true calling in this wonderful world of digital & business experimentation. My experimentation journey started at RS Components nearly 2 years ago and I now have the privilege of leading our newly branded experimentation practice function – a centre of excellence focused on democratising the power of experimentation. Though just a youngster in a field full of veterans, I have a strong background in software engineering and business strategy which has given me a unique and interesting perspective on the evolving digital world.
Dan Croxen-John (CEO of AWA digital): Welcome everybody to our interview with Stewart Ehoff, who is the Experimentation Lead at RS Components. Welcome, Stewart.
Stewart Ehoff: Hi there Dan, thanks for having me on.
So what we are going to talk through is your journey to becoming Experimentation Lead at RS Components, but before we do, tell us a bit about the business: what RS Components does, how long it has been going, and ultimately how you came to become Experimentation Lead?
Yeah sure, so a lot to unpack there. To start with, who are RS? It is a question we get a lot, and the easiest answer is that we are probably the biggest company that you have never heard of. RS Components is part of the Electrocomponents Group and we operate in 32 global markets, with an annual turnover of just under £2 billion. We are an electrical component distributor and also a manufacturer in global industry, effectively. We used to be a catalogue company back in the days of old, similar to the Argos catalogue: engineers would come to RS and order their products via the catalogue, and we transformed in the digital era into a primarily ecommerce-driven business.
So I guess where I fit into that, in the experimentation space, is effectively making sure that our customer journeys are wonderful at a grassroots level. I see Experimentation Leads in various other businesses with targets around KPIs. Our ethos and positioning really is to create a world-class customer experience; the revenue is organic. You get terms like growth hackers and CRO; we just call it experimentation.
Specifically, my role is Experimentation Lead. I came to this position through the technology transformation that has been going on over the last few years to cloud and microservices, along with some of our business practices such as DevOps and agile. Piggybacking off that has been the ambitious strategy I put to the business, about a year and a half ago, to revolutionise the way we experiment and to drive that via full-stack experimentation, which I am sure we will talk about a bit later. So that is who we are and a little bit about how I got to where I am. The journey is obviously a little more comprehensive than that, but it gives you a bit of a flavour, I suppose.
In your experience what makes a good Head of Experimentation, what qualities and attributes are essential?
From my experience, Experimentation Heads can be quite multi-disciplinary. When you look from organisation to organisation, the most common Experimentation Head exists in the marketing arena, some exist purely in the data arena, and I myself sit firmly in the engineering arena. But outside of that specialism, you could say, I think to be a good Head of Experimentation within an organisation you really need to have a broad understanding of digital experiences fundamentally: from how the code works to user psychology, all the way to design and implementation, bits and pieces like that. Then, on the strategic side, being able to create strong roadmaps, and I suppose having the interpersonal skills to relay the value and benefits back to the business.
Myself and everyone else in my team tend to refer to ourselves as Swiss Army knives, because to be in experimentation you need to be at that higher level of understanding of the digital experience. Like I said, it is a very broad question, but those sorts of qualities would stand you a good chance in the role, I think.
How do you explain your job to someone that is unfamiliar with the term experimentation? You are in a pub or having dinner with someone and they ask you “What do you do?” How do you explain it?
I am quite a verbose person so I don’t tend to give half measures; I am guilty of giving the whole spiel, if you will. But it is quite a difficult one, isn’t it? Because most people are a little bit naive to the fact that big businesses operate primarily through experimentation and digital channels. If you look at Facebook, Amazon and Netflix, I think people know subconsciously that they are being experimented on, and I think it is coming more to light in our social spheres, with recent documentaries like “The Social Dilemma” and bits and pieces like that.
Terms like CRO or growth hacking are probably the better-known terms to use, although I do not like to use those. The way that I explain it, I typically say “I am a software engineer who specialises in creating world-class experiences for customers”.
OK… and you mentioned that you do not like the term CRO. What is it that you do not like about it?
I suppose it is a bit of a controversial take. CRO is the general term it is referenced as within the industry. I think CRO is a bit of a marketing term, and I don’t like it because it tunnel-visions specifically on the conversion metric, which is where we obviously want to get all of our customers to. But if you are tunnel-visioning purely on the conversion metric, I think there is a danger and a risk that activities and optimisations run specifically to drive conversion give only short-term benefits to an organisation. If conversion is your only focus, I think you lose sight of the long-term purpose of your business and the problem you are trying to solve for your customers.
So for me, although conversion doesn’t have to be just a customer completing that purchase, I think that is generally how organisations that don’t typically have CRO people see it when they bring a CRO person in. It is this mixed perception, not just among the people in the CRO arena but outside it as well, that I do not think tells the true story of what that role is trying to achieve.
And what is it that drives you and RS Components to experimentation?
For me on a personal level, my background is in software engineering. I started out as a junior developer years ago building WordPress websites, working agency-side, and a lot of the time when you are in that agency-style place you get requests from customers to do certain things that, for the most part, are based on experience or intuition. As an engineer, as you are building this, you think: this is rubbish, why would you want to do this? This doesn’t make any sense. But when you are on that side of the fence you don’t have the jurisdiction to challenge the customer, because the customer is always right. When I moved into the experimentation function at RS Components, I was able to tap into and channel that questioning attitude: why are we trying to do this? What are we trying to achieve? Being really data-driven and hypothesis-driven.
For me, it’s almost being that five year old asking why: why are we doing this, what are we trying to achieve? Asking lots of questions is quite intrinsic to my nature, so I think I am very lucky that I have landed in a position that satisfies my personal needs and my personal drive and ambition.
For the organisation, having someone, or a dedicated function, to challenge our assumptions, to challenge fundamentally what we conceptualise and to question how we deliver products and services to our customers, is always going to be beneficial. For RS Components specifically, the value and benefit we get out of experimentation is twofold. The first is that we are able to recalibrate and pivot when we are going in the wrong direction. The second is that when we do launch a winning feature via an experiment and get a tangible outcome based on real-world data, we can play back the value of that to the business and say “we put in X and we got out Y”, which in our world, the world of software engineering, where you have lots of moving parts, can be very difficult to do.
And what do you say to someone who works in a company that hasn’t adopted experimentation? What would you say to encourage their leadership team to look at it and adopt it?
That is a really interesting question. I would probably start by asking how many releases they have done that they knew didn’t work for their customers. And if the answer to that question is none, then the next question is: OK, is it none because you just don’t know, or is it truly none? I think that question might get some cogs turning. The real crux of it is that you don’t know what you don’t know.
When you are working in big data and big ecommerce, marginal percentage differences can make a huge difference to your bottom line, so there are organisations that are on a good path, have a good user experience, want to take it to the next level and are actively doing so. I don’t think anyone sets out to do anything with the objective of failing, right? But sometimes we just get it wrong; sometimes we made the wrong decision or took a wrong turn, and through the power of experimentation you can understand whether you are on the right path or not. In a way it is cheating, it really is. You get to validate and confirm with your users whether the thing you are about to invest a lot of time in is actually worth doing or not. Although that is a very simplistic framing, we know as experimentation people that it is significantly more complex than that, particularly in an organisation of our size.
Fundamentally, if you want to be more data-driven, if you want to protect your company’s interests and protect your customers as well, you should definitely take a look at experimentation.
Stewart, are you able to share with us an example of the success you have had, whether it has been a win, an unusual outcome, or an example of where you have gained some critical insight?
Yeah, I have a really recent example of this, actually. We recently ran an experiment on our basket page. I won’t bore you with the details of the experiment, but it was focused on our users at the basket, whether they were logging in and then proceeding to the checkout or proceeding to checkout as a guest user. What we wanted to focus on was, once you are at the basket, ushering you along as best we can into the checkout flow. And this was part of a tactical piece of work that was primarily focused on saving sprint and engineering time.
So if the experiment paid off, the payoff for the business would be an estimated 6–7 sprints of work that we wouldn’t have to spend engineering a particular solution. That was everything we had to play for, and we ran the experiment saying that even if the net outcome was neutral, as an organisation we would have benefited in time and could focus those efforts elsewhere. Surprisingly, though, the experiment increased our basket-to-checkout progression by 12%, which was massive for the organisation. What was more interesting was that it cascaded down, not only in users going from the basket to our first level of checkout, but also in users then completing those purchases as well.
I have said for a long time, to anyone I have spoken to in the experimentation world, that I think the greatest benefit of experimentation is opening up ideation to the wider organisation, because within that you are going to have golden-ticket ideas that you wouldn’t be able to expose if you didn’t have an experiment-driven organisation. This was one of those golden-ticket ideas that really is great for our product offering, and obviously a big hit for our customers as well. We are celebrating that success at the moment and we are in the process of rolling it out to some of our other markets to do some further confirmation of the numbers we are seeing, but it has driven a lot of value for the organisation and it didn’t take much time to do.
Goodness 12% that’s something else! What mistakes do you see organisations who use experimentation making?
I think the big one for me is using experimentation as a validation vessel rather than what you would consider true experimentation. I can give you an example of that. Let’s say we had some senior leader in a business who says “hey, we should build this thing, as I have seen this other competitor do it and I think we need to have it as well; I think it will be good for our customers too”.
There is a lot to unpack in that statement, but there is someone who got a notion, an idea, and because they are in quite a senior position they have the power and sway to say let’s do it. So it gets put into motion and into a sprint. A designer jumps into it and engineers jump into it, and during that design process and engineering process there are a lot of micro-decisions that those people are making. The designers might come up with four or five different ideas for that particular request that has come from a senior leader. Ultimately they have to commit to one, so a bunch of designers get in a room and go “we have to make a decision here, what’s the best thing?”, and someone says “well, I think it should be this” and someone else says “I think it should be this”. But they have to commit to one thing. Then the same thing happens in the engineering process: well, we can do it this way or that way, so again there are a lot of decisions being made all the way through that process. At the end of that you say “now let’s put it into an experiment” and just validate that the one thing we came out with at the end of all that work will do something positive for our customers.
That is not to say that if you run that experiment and get a good result it’s a bad thing, because using experimentation as a validation vessel is better than doing no experimentation at all. But going back to the original point of the question, what the biggest mistake is that organisations make: in that process, there might have been a design or engineered solution that was better than the one we ended up committing to, and we don’t know that; we have now lost that opportunity. Let’s say we ran that experiment and it gave us a 5% uplift in the metric we were looking at; well, one of the other options we were considering could have given us 10%, but we have lost that opportunity and we will probably never revisit it again.
Then the final thing on that: who is to say that the original idea was valid in the first place? Was it based on any data, or was it just based on an assumption and a bit of a guess?
So it could have been a fluke rather than a data-led hypothesis?
Sure, yeah. Actually, I think the higher up the chain you go, in my experience, the less data-driven those decisions tend to be.
Sure. What does your team look like, in terms of size, structure and skill set? How do you organise: do you centralise, or federate?
We are quite a small team at the moment actually, which you may be surprised to hear: we are currently a team of two, scaling up to three next month. We are positioned in what’s known at RS Components as the engineering team, and our team is not focused on the doing or the delivery of experimentation; we are focused on democratising the potential of experimentation to our wider teams. Within that we have some consultancy time, but our core principle is really being set up as a centre of excellence. So we don’t do the doing, but we empower others to run their own experiments within their areas. Although we are a small team, we are scaling up, because as experimentation expands across the portfolio and our design and engineering teams grow, so does the demand on our team. Over the last year or so we have set some really solid foundations with a really small amount of resource, which will be surprising to some, and now that we have set that strong technology foundation with a small team, we can think about scaling up.
Specifically, on the skills we have got within our experimentation practice function: we have the skills to take any idea and run an end-to-end experiment, covering idea and hypothesis building, all the way through design, engineering build, QA and launch, and then the data analysis and whatever you do with those results. Our job now is really to educate the entire business to be able to do exactly what we can do in our function. We don’t tend to inject resource into the teams; we have more of a consultancy-based approach, so if there is work that individual agile teams are not skilled enough to do, they can get consultancy resource from our team to help on an interim basis. But that is done as an educational piece, rather than “we do it for you”: we will help you along with it, to empower teams and build up our maturity as we go.
And how much visibility does experimentation get within the company? Within RS Components?
It depends on how you quantify how much, or what your measure is. What we tend to go off is what we consider to be hierarchical awareness: how far up the chain do our results go, and how well understood is experimentation up the hierarchy? That is generally our measure, and our visibility really goes up to board level. We are very lucky in that our CTO is very invested in what we are doing in the experimentation arena. It is through that understanding, and the need to drive our experimentation practices in the way that we are, that we secured investment this year to re-partner with Optimizely, just as an example. That was sponsored by the C-suite.
In terms of our results, they go to a really wide portfolio of stakeholders that don’t just exist in our technology sphere but also in our data and product spheres, customer services and various other areas of the organisation. It is a growing process, and over time we are being exposed to more and more people within the organisation. From a height level, if you will, we go all the way up; from a width level, that is what we are working on now.
In terms of the velocity of your programme, how many experiments are you launching this year, and what about next year? And what is your technology stack? How do you manage all this from an infrastructure point of view?
About a year and a half ago, our experiment velocity was about 100 per year, maybe slightly more, but that was under a very different experiment team structure. At that point we were five within the team and we were basically a silo team, a centralised team if you like, that did everything: we would do the whole thing and then hand the results back to the business at the end of the process. We realised quite quickly that there are limitations to this model. We were hitting our velocity ceiling with the resources in the team, and if we wanted to scale up we would just need more experimentation experts to help deliver those experiments. But we also realised there was a huge portion of work happening in the organisation that we just couldn’t touch with experimentation, and it led to the realisation that the only way RS Components could ever truly be an experiment-driven organisation is if everyone was able to harness the power of experimentation.
This is why we have changed our approach, and why in the short term our experiment velocity has significantly decreased: we have probably run a sum total in the region of 25 experiments this year, and generally those have been BAU mission-critical activities that we are still supporting the business with as we increase our experimentation maturity across our individual agile teams. But what we are really talking about here is the difference between 100 centralised client-side experiments built in-house by one individual team versus the latent potential we have in the organisation, where everything we engineer and deliver to our customers becomes an experiment. So there’s going to be a huge paradigm shift as we continue to scale up our programme, particularly over the course of the next six months or so now that we have laid a lot of our technology foundations, where our velocity is going to skyrocket way beyond the 100 experiments we were able to deliver within a single centralised team.
You mentioned a bit about the technology that you use, can you expand on that?
This year we partnered with Optimizely, and we are using both the Optimizely client-side and full-stack products. It is the full-stack product that we are really focused on; that’s going to drive a huge level of value and innovation in the experimentation space within the business. The way that we are distributing that and getting it across to our other engineering teams is quite bespoke, and this is really where my engineering specialisms come into play. We have created, effectively, a standardised wrapper for a lot of the Optimizely full-stack solutions, which enables us to distribute the experimentation capabilities to all of our agile teams’ technology stacks without them having to do anything. So out of the box, within their single-page applications, or within their cloud-based services, or what have you, they have got the experimentation capabilities.
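To make the idea of a standardised wrapper concrete, here is a minimal sketch of what such a layer might look like. This is an illustration only: the `SdkClient` stub below stands in for a real full-stack SDK client, and its method names are assumptions for this example, not the actual Optimizely API. The point is that teams consume one small, safe interface rather than wiring up the SDK themselves.

```python
class SdkClient:
    """Stub standing in for a real full-stack SDK client (hypothetical API)."""

    def __init__(self, assignments):
        # Maps experiment key -> variation key for assigned users.
        self._assignments = assignments

    def get_variation(self, experiment_key, user_id):
        return self._assignments.get(experiment_key)


class ExperimentWrapper:
    """Standardised wrapper that agile teams import, giving them
    experimentation capabilities without touching the SDK directly."""

    def __init__(self, client, default_variation="control"):
        self._client = client
        self._default = default_variation

    def variation(self, experiment_key, user_id):
        # Fail safe: if the SDK errors or the user is unassigned,
        # fall back to the control experience rather than breaking the page.
        try:
            result = self._client.get_variation(experiment_key, user_id)
        except Exception:
            result = None
        return result or self._default


# Example usage: the experiment name here is invented for illustration.
client = SdkClient({"basket_to_checkout": "guest_checkout_prompt"})
wrapper = ExperimentWrapper(client)
print(wrapper.variation("basket_to_checkout", "user-42"))   # guest_checkout_prompt
print(wrapper.variation("unknown_experiment", "user-42"))   # control
```

The design choice worth noting is the safe default: a team that adopts the wrapper can never ship an error to a customer because an experiment is misconfigured; the worst case is always the control experience.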
Going back to the velocity piece, we are really focused on making sure that experimentation has as few barriers to execution as possible, and from a technology perspective that is very achievable, because it helps us drive experimentation buy-in and increased velocity across our entire organisation.
So given all of this, how do you measure the success of your program?
So we have got a wide array of metrics that we use to measure success. The top-level one really is value in terms of revenue. That’s the big one that everyone wants to know: overall, how much money has your experimentation driven for the organisation? Underneath the higher-level metric of revenue are a lot of other metrics that bubble up and contribute to it. We have four areas of metrics and KPIs that we can use to demonstrate a viable programme.
The first is that high-level value piece, where we are looking at cumulative increases to conversion rate, overall revenue that we have driven through experimentation splits, and various other value-based metrics. We then have our experimentation metrics: what is our inferred win rate, what is our velocity, as you mentioned before, what is the proportion of ideas that we generate to the number of experiments that are activated, and a bunch of other experimentation-specific metrics that we can use to assess how things are going in that area.
We have then got the maturity aspect, which we have been working on with our Optimizely partners, getting KPIs and measurements set up that actually measure our organisational maturity in the experimentation space. Typically that takes the form of a quarterly questionnaire that we send to each of our engineering managers and product owners. It is quite a long questionnaire, but it asks lots of questions around what resources you have in your team, how many experiments you might be running, and a lot of operational stuff basically. We then score based on those answers and create an aggregated view of our overall maturity, which also helps us to understand the areas we need to work on as an organisation. What are our gaps, or our weak areas?
The last one is quite an obvious one: how many teams are actually enabled for experimentation and running experiments? How many logins do we get to our platform per month, how many teams are onboarded with our programme management tooling and submitting ideas, and bits and pieces like that. Those are our four areas and, overall, the latter three all bubble up to revenue. We then have a revenue attribution model that we use to produce a final value statement that we can play back to the business: across all of our experiments and all the things that we have ended up implementing, here is the total value that we have driven.
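The experimentation-metrics area described above (win rate, velocity, and the proportion of ideas that become live experiments) reduces to a few simple ratios. A hedged sketch, with invented experiment records purely for illustration:

```python
# Hypothetical programme records: each entry is an idea that may or may not
# have been launched as an experiment, and may or may not have won.
experiments = [
    {"name": "basket CTA",     "launched": True,  "winner": True},
    {"name": "guest checkout", "launched": True,  "winner": False},
    {"name": "search facets",  "launched": False, "winner": False},  # idea only
]

launched = [e for e in experiments if e["launched"]]

# Velocity: experiments actually run in the period.
velocity = len(launched)

# Inferred win rate: winners as a share of launched experiments.
win_rate = sum(e["winner"] for e in launched) / len(launched)

# Idea-to-activation ratio: launched experiments as a share of all ideas.
activation_rate = len(launched) / len(experiments)

print(f"velocity: {velocity} experiments")
print(f"win rate: {win_rate:.0%}")
print(f"ideas activated: {activation_rate:.0%}")
```

In a real programme these records would come from the experiment management platform rather than a hard-coded list, but the roll-up logic is the same.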
So for example, in terms of time horizon, if you have an uplift from a particular experiment, how long do you continue to recognise that uplift for?
We use the Optimizely time horizon calculations, so you could recognise that up to the end of time, but the general time horizon we tend to use is 12 months; the uplift degrades from month 1 to month 12 over time using their time-discount figures.
Which is more important, running lots of experiments or fewer but bolder? And why would you choose one over the other?
I would say that I would prefer fewer but bolder, personally. I think chasing quantity is a bit of a falsity, if I do say so myself: within that quantity, are there quality ideas? Do we have a good quality problem statement, a well-formed hypothesis, and as much time as we can to explore that problem, thinking about all the different ideas that could solve it and then using those different ideas as different variations in an experiment, to ultimately increase our win chances? I am sure you know this yourself, Dan: the more variations you put into an experiment, the higher chance you have of finding a winner.
When someone says to me that they are running lots of experiments, I would imagine they are quite quick-fire things, probably small changes, and probably with only one variation. That is the assumption I would make if I heard that. Whereas bolder, bigger experiments have higher potential for learning and higher potential for impact, and in my view it is those big hitters and those big transformational pieces that actually get people excited and get people involved in the experimentation process as well. So my preference would be fewer but bigger and bolder.
How do you define bold?
I would say for me it is how much it goes against the status quo of what the organisation does. There are lots of great examples I have heard of in the past of really bold experiments that have fundamentally changed the way a business thinks about its product offerings and solutions and what it is actually doing for its customers. It is a very hard term to quantify; you are almost asking, how brave are you being? What is the level of risk associated? How much are you fundamentally changing about the customer experience, and what’s the centre of gravity of that change?
It is a hard thing to define but I would say bold and innovative probably go hand in hand. How innovative is the solution and what is it driving that we probably can’t even realise today.
And if you were looking at a company from the outside, how would you know if they had a culture of experimentation?
This is a really easy one actually; I see this a lot and we have been guilty of it in the past. For me it is the difference between celebrating outputs versus outcomes. Experiment-driven organisations don’t necessarily celebrate when they have delivered something; they celebrate what it delivered for the customers and the business.
This isn’t to take away from hardworking designers and engineers the need to celebrate the “hey, we got something across the line and we launched it”, because I know how good that feels. But equally it must be devastating to launch something and for it to then have a negative impact on the business, and that is certainly not something anyone wants to celebrate. I think the big difference, from the outside looking in, is that an experiment-driven organisation will always celebrate the outcome, not the output, whereas non-experiment-driven organisations will say “hey, we launched this thing, now let’s move on to the next thing” and will probably not introspect on what the thing they actually delivered did. If they do, it will probably be finger-in-the-air stuff, because they didn’t launch it in a controlled way, with a control group they can actually measure the impact against.
Interesting answer. Who do you admire in the world of experimentation? What individuals or companies?
The big one for me is Professor Stefan Thomke, which is probably a very popular answer actually. I am not much of a reader myself, but his book ‘Experimentation Works’ was an absolute delight to read. I have also seen him speak a couple of times at conferences and his energy and enthusiasm is just palpable. He just loves this stuff; he is so passionate about it. I have a lot to learn from him, and there’s a lot I would like to mirror in his demeanour; his wealth of knowledge and the way he talks and delivers his message to people is just amazing. He has been a really influential person for me.
Another one, whom I met at a couple of seminars quite early on in my experimentation journey, is a guy called Jon Noronha, who is Vice President of Product at Optimizely and used to work at Bing as product owner on their product search. He has done a couple of papers and seminars on his time at Bing, what he learnt and how he has taken that into his experimentation journey. Just learning from someone who has been in that position, made the mistakes and then been so honest in owning up to say, you know what, we did this thing and it was awful, it was so bad, but we learnt from it, we recalibrated and we drove it in a different direction. For me, quite early on in my own experimentation journey, that message was very powerful. I had the pleasure of meeting Jon earlier this year; we sat down over lunch, I told him exactly what I thought, and I think he will continue to be a good influence on me as time goes on.
If you were coaching a young person, maybe even you some years ago, what advice would you give that person wanting to get into this area?
It is a tricky one, because I have never really considered someone jumping into experimentation as a first step in their career, because to me it is such a complex area; there is so much to it and so much you can go into. For a young person who was really interested in getting into this kind of stuff, I would almost advocate looking at a specialism: what part of the experimentation journey are you most interested in?
Generally you tend to find people in three areas. The beginning part is around thinking of ideas, conceptualising and problem-solving for those ideas and building hypotheses; people are really passionate about that, and it is an area where a career is easy to find. Then you have the middle bit, which is the design and the build, so the UX principles, engineering principles and bits and pieces like that. And then there is the other side, which is the data analysis: once we have done the experiment, how do we assess statistical significance, and how do we use that experiment data to drive further activities?
If I was talking to either a younger version of myself, or someone interested in getting into the experimentation game, I would say focus on a single area for development, because you can’t do it all at once and you can’t learn it all at once; it is so broad. Whatever it is you are doing now, think about how you can apply the core principles of experimentation to it. How can you be more data-driven about your activities as an engineer or as a marketer, whatever you are doing? How can you drive forward those principles, and how can you influence the people around you to become competent in those principles as well?
One thing I didn’t mention when we spoke about what it takes to be a good head of, or lead in, the experimentation world is that I do think you need a very good level of influence. If you are not an inherently charismatic or influential person, you are going to have trouble getting other people in your organisation to understand what it is you are actually driving. Because the reality is, the experimentation industry just isn’t there yet – it is not bread and butter in the way that agile processes or DevOps almost are. Experimentation is not well known, so as an influencer, how good are you at getting that message across and getting followers within your organisation to buy into your principles?
How would you explain the difference between a business with a genuine experimentation culture and one that is simply running lots of tests? The velocity might be the same, but one has the experimentation culture and one hasn’t. How would you explain the differences between those two businesses, and how do you think they each arrived at that point?
The most important thing for me in experimentation is not looking through the lens of how many experiments we are running and how much money that is making us, but how much we are learning from those experiments, and then using the insights from them to calibrate new experiments that can actually drive business value. I do see this a lot, where organisations focus on experimentation velocity, saying “god, we are running hundreds of experiments and it is fantastic, loads of tests and experiments” – but in those hundreds of tests they might be re-testing the same thing three times in three different areas of the business, and they don’t know. Generally, the theme I see there is not taking the learnings of those experiments and using them as a primary data source in the calibration of future activities.
For me – and I can’t remember who said it, and if they see this video I apologise for not crediting you, but I think it might be someone from Booking.com or similar – they are quoted as saying “To be an experiment organisation, what you are actually becoming is a learning organisation. You are constantly learning and coming to work every day being ready to face the reality of just how wrong you actually are in your assumptions”. So for me, the big difference is this: you can be running hundreds of experiments a year, and more power to you, because I know how much of an operational nightmare that can actually be, but if you are not taking the learnings of those experiments in a practical way, disseminating those learnings to the business, and growing and evolving as an organisation in line with them, you are missing the point of experimentation in my view.
What developments do you think you might see in the field of experimentation, say, in the next three years?
I think when you ask most people this question, the first thing that pops into their minds is the future of technology – AI, machine learning, automated experiments and loads of cool things. It is like when we talk about cars and people ask what they will look like in 50 years: “oh yeah, there are going to be flying cars” – what we have already got now, but better! But I think the reality is that technology is moving so fast it is actually very difficult for us to conceptualise what is going to happen in two or three years’ time and what the experimentation space will look like.
One thing I know as of right now is that experimentation is such a human thing. Although we use lots of data points, and we use AI and machines to help calculate statistics and make sense of the data and the numbers, fundamentally experimentation is a human-based principle. We are applying statistics and scientific rigour to solve human problems. Computers cannot understand what those problems are; the articulation of what the problem is, and what we are going to do to solve it, is not something that can be automated in my view. At least not in the realms of technology that we have today, right!
But one thing I do know, and am almost truly certain of, is that over the next two or three years the practices and principles of experimentation will become a much broader industry standard, and not one exclusive to big tech companies and large enterprises. I think it will go the way of agile: agile has been around for a long time, but it has become an adopted, well-known and well-understood best practice in organisations of any size, and so too will experimentation be in that same place.
So some of the audience who will be watching this will be from smaller businesses, who might look at the size of RS Components and think they just don’t have the level of traffic to test at the scale you do. What do you say to them?
I would say that, through the lens of specific digital experimentation and actually running those types of activities, yes, there are some benchmarks you have to meet in terms of traffic and bits and pieces like that in order to get a significant result. But really, for me, even if your traffic numbers are quite low, you can adopt the principles of experimentation, apply them to any type of business context, and just make sure that the decisions you are making use the data sources available to you.
Whether that be customer feedback or any other digital data points you might have. Is your decision making purely based on ideas that just pop into your head one time, or on a similar ethos from your senior leadership – or are the things you are doing truly based on sound principles of data and whatever other information sources you have available to you? The other thing is that small organisations might not have the traffic numbers to achieve the gold-standard 95% statistical significance, but they can still apply experimentation to at least get somewhere close to that mark. For me, statistical significance is really a risk factor: we are saying that 95 out of the 100 times we run the experiment, we are going to see this result.
If you only have the traffic to reach 75–80% statistical significance, that is still a reasonably good indicator. It is less good than 95%, of course, but that is not to say that experimentation isn’t the right thing for you just yet, because if you can use the practices to drive your business forward, in time you will achieve the traffic numbers to mature.
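As an aside for readers who want to see Stewart’s point in numbers: one common way to compute the significance he describes is a two-proportion z-test on the conversion rates of two variants. The sketch below is a minimal illustration, not RS Components’ actual tooling, and all traffic and conversion figures in it are hypothetical; it simply shows how the same observed lift reaches a much higher confidence level once traffic grows.

```python
import math

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-proportion z-test (normal approximation).

    Returns the confidence level (1 minus the two-sided p-value) that
    the conversion rates of variants A and B genuinely differ.
    """
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled)
                        * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return 1 - p_value

# Same 5% -> 6% conversion lift, two hypothetical traffic levels:
low = significance(2000, 100, 2000, 120)        # below the 95% benchmark
high = significance(20000, 1000, 20000, 1200)   # comfortably above 95%
print(f"low traffic:  {low:.3f}")
print(f"high traffic: {high:.3f}")
```

With the smaller sample the identical lift only reaches roughly the 75–85% confidence band Stewart mentions, while ten times the traffic pushes it well past 95% – which is why lower-confidence reads can still be a useful directional indicator for smaller sites.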
Dan Croxen-John: You are absolutely right, and it is a question that keeps coming up from the smaller businesses we talk to. Stewart, you have been an absolute dream to interview – somebody who really has seen experimentation from the ground up, from a very interesting engineering-led start, and you have shown great eloquence and great erudition in your answers. I am really keen to share this interview with our audience. So, thank you very much and have a good day.
Stewart Ehoff: Thanks very much for having me, cheers.
Is your CRO programme delivering the impact you hoped for?
Benchmark your CRO now for an immediate, free report packed with ACTIONABLE insights you and your team can implement today to increase conversion.
Takes only two minutes
If your CRO programme is not delivering the highest ROI of all of your marketing spend, then we should talk.