
Interview with Experimentation Lead: Andre Richter at Just Eat

We’re joined by Andre Richter, Experimentation Lead at Just Eat

Watch the video to view the full 33-minute interview, or read the transcript below.

Andre Richter bio:

André Richter is a data science manager with a focus on data-driven decision-making at scale. He leverages his economics background to help companies understand and generate value from their data. André currently leads the Experimentation Team at Just Eat, where his team helps to evaluate the impact of product changes on customer behaviour. Previously, he worked in aviation, where he led a data science team building tools to automate insight discovery and to drive efficiency gains. André holds a Ph.D. in Economics from the Swedish Institute for Social Research.


Dan Croxen-John (CEO of AWA digital): Welcome to our series of interviews with experimentation practitioners in large, well-known companies and smaller, ambitious ones. I’m here today with Andre Richter, the Experimentation Lead at Just Eat. Welcome, Andre.

So we’ve got a number of questions for you, and one of the first ones is: tell us a bit about Just Eat and your role as a data science manager, and some of your responsibilities.

Andre Richter (Experimentation Lead at Just Eat): So I hope you know what Just Eat is. It’s the largest food delivery app outside of China. If you’re hungry, if you want a snack, Just Eat is here to help you find the right food near you. My role at Just Eat is Experimentation Lead, so I’m leading a team of analysts and data scientists who help design and analyse experiments for a lot of different stakeholders.

Thank you so much. And Andre, tell us a bit about your journey to becoming an experimentation leader at Just Eat. What were you doing before you started this role?

So my journey really started a long time ago in academia. I used to be an academic researcher in economics. We did a lot of social science research, and much of the focus in this line of research is trying to understand how to use data, observational and non-observational, about real-world people and how they are affected by certain changes in their environment. A big focus there is how to draw insights as if those changes had happened in an experiment.

This really gave me the background in terms of thinking about data and thinking about experiments, thinking about how to use data in a smart or innovative way.  While I was working as a researcher on this topic, I got more and more impressed by how data science transformed the industry and how it really transformed businesses. 

So eventually I left academia to work as a data scientist, and whilst I was working on my machine learning algorithms and my deep learning neural networks, on pricing optimisation and routing algorithms and all those topics, I started helping colleagues set up A/B tests in a slightly different way than they were used to, and that was very well received. Over time I helped more and more with this kind of work, and that’s how I eventually became more and more specialised in this area. Then I saw that Just Eat was looking for someone to really elevate their experimentation capability. I thought it was a really good match, and Just Eat felt the same. So that’s how I became the Experimentation Lead at Just Eat.

Great. And you’re not the first head of experimentation I’ve spoken to who has come from an academic background. So what makes a good experimentation practitioner? What sort of qualities or attributes do you think are essential?

I think for me the key is really the ability to wear quite a lot of different hats at the same time. 

So, you know, I’m leading a team of analysts and data scientists that helps a really wide range of colleagues with their experiments, and they are all from different business units, such as product, marketing, engineering, logistics, operations and so on. They all have slightly different cultures, a different language, and different familiarity with data.

So in my day-to-day work I really act as a consultant, but also as a subject matter expert, an educator, sometimes an evangelist or a change agent, and that is in addition to my actual team lead role. I have a team of eight reporting to me and we own the reporting framework, which is a big pipeline. It really automates insight discovery for us. We don’t want to have hundreds of product analysts sinking hours and hours into analysing experiments; we want to automate and standardise as much as possible.

This includes tracking the data, transforming data, and developing and deploying the methods we use to help analyse experiments at scale. This part of the job really combines being almost like a product owner and a product manager, but also a researcher, and obviously there’s the people management aspect and the coaching aspect. So you really have to be a generalist to be an experimentation practitioner.

So wearing many hats is fundamental to your role. How do you explain your role to somebody who’s unfamiliar with the term experimentation?

I would say, I really help people understand if any changes that they are making are steps in the right direction or not. So let me give you an example to make this really concrete, for instance, at Just Eat we are personalising the content of what customers see to highlight the food options most relevant to each and every individual. 

So, the personalisation team is a big stakeholder of mine and when you open the Just Eat app, you see restaurants in a certain order or cuisine types in a certain order and this order is personalised to you and to you only, to show you the food most relevant to you. 

So, if you have in the past year only ordered from the vegetarian restaurants, this type of food is probably a bit more relevant to you, than say the popular kebab around the corner. So, you know, that’s how you’re personalising the content. 

And this team works really hard at the cutting edge of state-of-the-art algorithms to figure out how to do that. They are often trying out new, hot-off-the-press algorithms, and when a new one comes out, they come to me and say, “Hi, Andre, we want to test this new algorithm and pitch it against the current one”, and my team then helps make this comparison.

So you put one set of customers on the old algorithm and one set of customers on the new algorithm, and you define the metrics: how do we really measure success here? What does it mean for this algorithm to be better than the other one? And then we help with the implementation details and the analysis details, and I think that pretty much sums up my job.
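The comparison described here, one group of customers on the current algorithm and one on the new one, judged on an agreed success metric, can be sketched as a standard two-proportion z-test. This is a generic illustration, not Just Eat’s actual tooling, and the numbers are invented:

```python
from statistics import NormalDist

def two_proportion_ztest(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test: is the treatment's conversion rate different
    from the control's? Returns (observed lift, p-value)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)  # rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# 10.0% vs 11.0% conversion, 10,000 customers per arm
lift, p = two_proportion_ztest(1_000, 10_000, 1_100, 10_000)
```

With this much traffic the one-point lift comes out statistically significant at the usual 5% level; with far fewer users per arm, the same lift would not be distinguishable from noise.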

What is it about your background, your interest that draws you towards experimentation?

Me personally, I think experimentation is really at the heart of innovation. It really allows you to know if your innovation is a step in the right direction, or maybe a step to the side, or maybe even backwards. So in that sense, it really settles debates if you are unsure about what you’re doing, and enabling colleagues and whole functions with this capability is just really appealing to me.

It also has a nice side effect as well. I mentioned earlier that I got drawn to data science because I saw how transformative it was for different businesses. I realised over time that a proper experimentation infrastructure really enables a lot of data science projects. Think of personalisation again: without a proper experimentation infrastructure, all your personalisation efforts would be much more difficult. So in that sense it has this nice side effect: by providing the experimentation infrastructure, I also enable a lot of data science projects that are helping the business.

I really like the statement that experimentation is at the heart of innovation, and my next question is about what draws the organisation, Just Eat, towards experimentation. What’s been their journey in accepting, embedding and promoting this as a way of making decisions or changes to the business?

Just Eat is an incredibly customer-centric company in a really competitive market. So we’re constantly trying to innovate and to improve to deliver the best customer experience possible. How do you do this? Through constant experimentation. Experimentation isn’t something that you do; it’s how you do it. It’s a way of working, and experimentation is essentially checking in with your customers to see if a proposed change works for them or not. That’s the go-to approach, the preferred and trusted way of working. I think this strong link between our customer obsession and using experimentation as a way to get customer feedback is what really persuades a lot of people to live this experimentation culture.

I’ve done a lot of research on the big, big companies like Amazon, and from looking at their journey towards testing and experimentation, those organisations took about 7-8 years to get there, from the research I did. So from Just Eat’s perspective, from the point of launching a site or app, how long did it really take to adopt this as a way of doing things?

That’s a very good question. I think in different business units the experimentation maturity is different, and the perceived benefit of experimentation is different too. So in some business units it was much faster and in others a bit slower, which is all to do with resourcing and the different technical setups.

So there’s a patchwork of different levels of maturity across the organisation?

Yes, I think every single part of the business is experimenting, because we all want to make data-driven decisions, right? But the technical setup has a different maturity, which means in some areas it is much easier than in others.

Think of a standard A/B test, where you simply compare one set of customers to another set of customers and compare the means. This kind of standard A/B test, which you see a lot in conversion rate optimisation, just works really smoothly; it’s like a conveyor belt in a factory, it really works and it’s easy to do. But in some other parts of the business, think of couriers or the restaurant side, it is much more difficult, because it is a little more intricate to set up and analyse the experiments, so it requires a bit more effort to advance experimentation there.

One of the questions we often get asked, from those people that are new to experimentation, or thinking about it: they say, “I haven’t adopted experimentation in my company. Should I, and why?” What would you say to those people?

Obviously, since you’re talking to the Experimentation Lead, I just have to answer this with, “Yes, you should. What are you waiting for?” But I can give you a bit more of an elaborate answer here.

My way of thinking here is really about two different topics. When I think of experimentation and the value it brings to businesses, there are two different elements.

One is experimentation as a safety net, which is about risk mitigation.

The second is experimentation as a learning opportunity, which is all about eventually making steps in the right direction.

Let me talk about experimentation as a safety net first, and here I like to think of the large companies like Microsoft, LinkedIn, Netflix, Facebook, Amazon, Google, you name it. These companies run hundreds of experiments on any given day, and you can imagine that behind each and every one of these experiments is a team of people who went through the entire development cycle: a product manager was involved, a user researcher, designers, QA and so on. Yet Microsoft, for instance, tells us that about a third of all their tests have a negative impact. So instead of improving the customer experience, or whatever else the experiment was meant to improve, they see a decline.

Similarly, Netflix says only about 10% of their tests are successes, and 10% is a number that pops up quite frequently in this kind of domain. 

The big conclusion there is, you really need to be humble and anticipate that some of your ideas may not work at first, and some may even have a negative impact. And this is really the opportunity of experimentation for risk mitigation. 

Catching the features that would make something worse is really the big success for experimentation. If a company did not experiment at all, it would ship the successful features, and it would ship the features that don’t make any difference. But it would also ship the negative features, the bad apples. It’s catching the bad apples that really helps a company mitigate risk and de-risks them.

So in that sense, I would ask the person who hasn’t adopted experimentation yet: how much risk are you really taking that you’re not aware of? If you think of the last 100 features or 100 changes that you shipped, and only 10% had a downside, would you be comfortable telling your shareholders that you accepted this risk without any way of controlling it, or of knowing how big the impact was?

Now, if that thought makes you feel uncomfortable, experimentation is something you should think about.

I also mentioned experimentation as a learning opportunity. We need to anticipate that some of our ideas won’t quite work at first. But thanks to the data that we collect in an experiment, very often we can see what exactly didn’t go right.

That means by simply tweaking an experiment or a variant, we can turn an experiment that initially had a negative impact, or no impact at all, into a success, and this is the big opportunity of learning from experiments: how to turn more of your ideas into success stories. So if you haven’t adopted experimentation, think of your last features which may not have had any impact at all. How much upside could you have realised there that you’re currently not realising?

Andre, are you able to share with us an example of the success you’ve had, whether it’s a win or an unexpected outcome or gaining some unique insight from a test or experiment you have recently run?

I can’t really talk much about the content of particular experiments, but I can tell you that for the organisation as a whole, a really big success for my team and organisation was just to bring a more advanced tailored perspective to experimentation. 

So you know, my impression is that when you talk about experimentation, it is often perceived to be an engineering project, and the engineering aspect is really important because it just needs to work, easily and smoothly. But the thinking there is often that the final outcome is a basic A/B test, and that kind of neglects what is possible.

So think of it this way: if you’re learning a new programming language, say Python, the very first thing you learn is how to print “hello world” to the console. From a data perspective, the basic A/B test is really the “hello world” example of the experimentation world. It is really important to get this one right, but it’s not the final end point; it’s a starting point. And just bringing more advanced statistics and more advanced analytics to the company has really helped a lot: our tests are much faster and more sensitive, and our insights are much richer. So that was a really big win for us.

Clearly Just Eat has a huge number of users, visitors to your site, and orders. But for those businesses that don’t receive that volume of traffic from visitors and users, how do you scale beyond A/B testing to make experimentation broader, and have a broader impact?

When I look at the academic literature, I see a lot of tests with very few users or participants. I’ve seen papers that had six participants, and obviously those tests are a difficult sell because they’re probably very underpowered and you don’t know how generalisable their results are, but they’re a very good starting point.

So even if you have little traffic, if you have few users, you can still do experimentation to get some data, because some data is better than no data. That can help you make data-driven decisions, even if the data you have is not perfect; even imperfect data may be better than no data at all. So I think low traffic is not an excuse for not doing experimentation. There are trade-offs, there are caveats, but I don’t see this stopping you.
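The trade-off here, low traffic versus underpowered tests, can be quantified with a standard sample-size calculation. This sketch uses the usual normal approximation for a two-sided test on conversion rates; it is illustrative, not any particular platform’s formula:

```python
from statistics import NormalDist

def sample_size_per_arm(baseline_rate, min_detectable_lift,
                        alpha=0.05, power=0.8):
    """Approximate users needed per arm to detect an absolute lift in
    conversion rate of `min_detectable_lift` with a two-sided z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = baseline_rate + min_detectable_lift / 2   # average of the two rates
    variance = 2 * pooled * (1 - pooled)               # pooled-variance approximation
    return int((z_alpha + z_power) ** 2 * variance
               / min_detectable_lift ** 2) + 1

# Detecting a 2-point lift on a 10% baseline:
n = sample_size_per_arm(0.10, 0.02)
```

Detecting a two-point lift on a 10% baseline needs roughly 4,000 users per arm, and halving the detectable lift roughly quadruples that, which is why low-traffic sites either run tests longer or accept that only coarser effects are detectable.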

What mistakes do you see people or organisations making with experimentation?

I think the biggest issue is really when people don’t learn from experiments. Experimentation is a really powerful approach, right? Almost like a mindset. It ideally helps inform your product roadmap. Sometimes people treat experimentation more as a validation tool, and they don’t learn from it if it doesn’t validate their ideas. And to be clear, there’s room for that. If you are an engineer and you want to run a ‘do no harm’ experiment, where you change something but don’t really expect any impact on the customers, you just want to make sure nothing breaks, you just run them and that’s fine. But from a product perspective, you really want to make sure that you’re learning from your experiment and have a more nuanced understanding of your customers after you run it.

What does your team look like, Andre? You mentioned a team of eight, but what roles do they have and what skill sets do they have to do experimentation within Just Eat?

There are many teams that we work with. One is the tracking team, which we collaborate with very closely and which really makes sure we can see what’s happening in the product.

Tracking is the first step in experimentation. These people are the eyes and the ears of Just Eat, without whom we wouldn’t know what’s happening and how people are interacting with our product. All the GA work and the real-time analytics live in that team.

The core experimentation team consists of analysts, engineers and data scientists who build data pipelines and data products, automating insight discovery.

So, all the advanced analytics, statistics and machine learning sit here. But it needs to be said that we also collaborate very closely with the wider team. There are a lot of different product managers who are very good and very data savvy. We have an amazing community of product analysts who help a lot. We have really good machine learning engineers. And the entirety of tech, really.

How much visibility does experimentation get within the very top echelons of Just Eat?

I think we have really strong visibility. We have a whole range of outlets: very prominent product meetings, all-hands where results get presented, a dedicated experimentation forum, and company-wide newsletters where either the start of experiments or the results of experiments get highlighted. I think we are pretty visible up to the highest level.

Roughly how many tests and experiments would you run over the course of a year? And what technologies are underpinning it, is it server side or client side? Have you built your own experimentation platform?

I think last year we ran roughly 300 experiments. We are really using all kinds of technologies that are available to us: full stack, server side and client side. We have our own in-house platform, which we are still building out, but we also use third parties like Optimizely, and a separate third-party tool for marketing, and then we also have other bespoke solutions. So it really depends on the use case, but we’re using everything at our disposal.

Why would you build your own experimentation platform as well as use something like Optimizely?

You know, every third party that you can think of has upsides and downsides. So there are always some use cases where the third-party solutions may just not cut it, and if you build your own in-house solution, you can really cater to all the use cases that you want to cover.

So not either or, it’s ‘and’?

It is ‘and’ yes!

What are the KPIs the business uses to measure the success of your programme of experimentation?

I think the big one is obviously the number of experiments that are run. I think this is the dominant KPI. 

But for me, I also care a lot about how easy it is to run experiments and how many business units we’re enabling to run experiments. In particular, standard A/B tests should really be effortless. But if you look at different business units, say couriers: you have courier networks, and couriers might be affected by what happens to other couriers. So you can’t really run simple A/B tests there; you need to set those things up in a slightly more elaborate way. How easy is it to run experiments there? Or how easy is it to run experiments on the operations side? Those are some of the things that I’m looking at.
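One common way to handle the interference described here, where couriers in a network affect one another, is to randomise whole clusters (zones, cities, time windows) rather than individuals, so units that interact always share a variant. A minimal sketch, with illustrative names and a hash-based deterministic assignment:

```python
import hashlib

def assign_cluster(cluster_id: str, salt: str = "courier-network-expt") -> str:
    """Deterministically assign an entire cluster (e.g. a delivery zone)
    to control or treatment, so couriers who interact get the same variant."""
    digest = hashlib.sha256(f"{salt}:{cluster_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 else "control"

# Every courier in the same zone sees the same variant
variant = assign_cluster("zone-042")
```

The analysis then has to treat the cluster, not the individual courier, as the unit of randomisation, which shrinks the effective sample size; that is one reason these experiments take more effort than a standard A/B test.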

Is there a way you can put a number on the ease with which tests can be launched? Is it time to launch?

It is very difficult, it’s not super easy. That doesn’t mean you can’t attempt to get a number. I’m not claiming that I have the perfect solution, but I have something.

I think I know the answer to this question, but I’m still interested. Is it more important to run lots and lots of tests or perhaps fewer but bolder and why would you go one over the other?

I think what you’re asking here is about the content of experiments. I normally don’t get too involved in the content of experiments. You know, we have a really amazing community of product managers, engineers, developers, researchers and designers who specialise in the problem itself. They do nothing else but think really hard about what problems the user might have and how we could make the life of the user easier, and they decide what we should be testing and experimenting on. I personally think you should run as many experiments as you need to solve whatever customer problem you are trying to fix.

So again, I’m not getting too much into the content decisions; I’m more focused on the capability. For me, it’s all about data-driven decision making. Again, experimentation isn’t something that you do, it is more like how you do it. It’s a fundamental way of getting data that allows you to make a decision in a data-driven way. And in that context, if I were to translate your question into the data-driven decision-making frame, your question would almost be, ‘should we make lots of data-driven decisions or fewer but bolder?’ I have a hard time answering that question. For me, you should make as many data-driven, informed decisions as possible.

Thank you. And when you look at a company you are not working in, from the outside, how would you know if they have a culture of experimentation? How could you tell?

I think if you try out a product and pay attention to it, you should see it evolve over time. In particular, if you have some sticking points, you should see those sticking points improve and your experience should improve. That’s not conclusive proof that this company has an experimentation culture, but it is a data point that might hint that it does.

Who, Andre, do you admire in the world of experimentation? What is it about those people that interests you?

I think the people that come to mind immediately would be Ron Kohavi and Lukas Vermeer, but also Stefan Thomke. Ron and Lukas have really helped the world of experimentation by sharing so much of their knowledge and expertise, like the numerous articles they’ve written in relation to experimentation at Microsoft, and they have been exceptionally helpful in accelerating teams on their own journeys.

I can’t stress enough how much value Ron and Lukas have created just by being so transparent and open about what they’ve done and how they’ve done it. So that’s really been super helpful.

Stefan Thomke’s books and articles have really helped on the culture side of things. So if you have an executive who is not quite convinced yet, Stefan’s work brings forward some very powerful arguments which have helped.

If you were coaching a young person who was keen to get into this area, what sort of advice would you give them?

I would say really adopt the experimentation mindset, which is a mindset of constant learning and improving. You’re going to touch on so many different domains, both technical, from the data perspective, and business domains, and that can easily be really overwhelming. So just keep a mindset of constant learning and constant improving; that will help you learn and master the area.

How would you explain the difference between a business that is really taking experimentation very seriously and one that was just running lots of tests? What would be the difference?

Let me start by saying that there is room for simply running tests. Think of the typical engineering angle, where you want to do ‘no harm’ tests: you simply run tests to see that nothing bad happens when you deploy your change. There is nothing wrong with that.

But if you look at product- and customer-centric experiments, I think the big thing is that in a proper experimentation culture you do not presuppose the outcome of your experiments. Let me give you an example. Imagine someone comes to you and says, “Hey, we need to run this test now because we need to go live in a week.” This scenario implies that you are anticipating that the outcome of the test will be positive, and that after you have cleared the hurdle of running an experiment, you can go live with your idea. This is using experimentation as a validation tool of some sort, and it does not take into account that the outcome could actually be something other than a success.

In an experimentation culture, it’s really a testing-ground culture. You start with the problem space: you have a hypothesis, you have the seed of an idea. You come up with three, four, five different experiments around the theme you’re working on, then you run the experiments and see what the outcomes are. And whatever the outcome of those experiments is, you should have a more nuanced understanding of your customers. With this more nuanced understanding, you go back to your product roadmap and decide what to do next. So only after you see the results do you decide whether it should go live or not.

Finally, Andre, what developments do you think we might see in a field of experimentation in the next say three years?

As I mentioned before, I’m coming at this from a data perspective, my data science perspective, which may be a bit different from your usual audience. But I think you will see more and more advanced statistics, causal inference and machine learning, and you will see them running in combination, just because they can add so much value.

You can run your tests faster, you can surface insights much more clearly, in a much more data-driven way, and it really helps you accelerate your experimentation programme.

I think personalisation will continue to be a strong field; it has been strong for the last couple of years already. But I also think you will see more and more quasi-experiment techniques. Those are the techniques I mentioned at the beginning, where you use observational data instead of an experiment. We are already seeing some companies write more blog posts about these things, and I think those techniques will become more prevalent.

Andre, thank you very much for your time. It’s been fabulous talking to you and I’ve learned a lot. And some of the pithiness of your statements has been great.

Is your CRO programme delivering the impact you hoped for?

Benchmark your CRO now for an immediate, free report packed with actionable insights you and your team can implement today to increase conversion.

Takes only two minutes

If your CRO programme is not delivering the highest ROI of all of your marketing spend, then we should talk.