A/B Testing and User Research: Synergies for Success

Improving product experiences requires both creativity and analysis. A/B testing provides data-driven insights into what resonates with users, while user research uncovers deeper human motivations and needs. 

Used together, these approaches create a powerful synergy that elevates product design. 

In this article, we will explore how A/B testing and user research can work hand-in-hand to create delightful digital experiences that truly connect with customers.

By leveraging the strengths of each methodology, product teams can make informed decisions rooted in both qualitative and quantitative evidence. This leads to products that balance artful design thinking with performance optimization – the best of both worlds. 

We aim to demonstrate the synergies between these disciplines and empower teams to build human-centric products powered by data. When A/B testing and user research work in harmony, the possibilities for better products are endless.

Understanding A/B Testing

Benefits

1. Quickly test hypotheses and ideas

A/B testing enables a rapid iteration cycle by allowing teams to test hypotheses and new ideas quickly and easily. Rather than fully building out features before validation, small changes can be deployed as A/B test variants shown to a percentage of users. 

Within days or weeks, data reveals how users respond to each version. This means novel concepts can be tried out and honed without long development and release cycles.

For example, if a team has an idea for an improved onboarding flow, they could create a simplified prototype version as an A/B test variant. By showing it to 10% of new users, they quickly gain real-world data on its performance compared to the current onboarding experience. 

There’s no need to engineer a fully polished onboarding flow before testing the hypothesis.
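
To make the traffic split concrete, here is a minimal sketch of how a team might deterministically assign 10% of new users to a prototype variant. The hashing scheme, experiment name, and function are illustrative assumptions rather than any particular platform’s API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, exposure_pct: int = 10) -> str:
    """Deterministically bucket a user into 'variant' or 'control'.

    Hashing user_id together with the experiment name yields a stable,
    roughly uniform bucket in [0, 100), so each user always sees the
    same version and separate experiments split traffic independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "variant" if bucket < exposure_pct else "control"

# Example: show the simplified onboarding prototype to ~10% of new users.
print(assign_variant("user-42", "onboarding-v2"))  # 'control' or 'variant'
```

Because assignment is a pure function of user ID and experiment name, no state needs to be stored to keep a user’s experience consistent across sessions.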

A/B testing shifts experimentation upstream in the development process. Teams can validate that users even want a proposed feature before dedicating resources towards a full build.

Concepts can be rapidly refined based on user feedback until they are primed for scaled implementation. Rather than big, risky releases, ideas are systematically tested and iterated on with the safety net of A/B testing.

2. Optimize key metrics like conversions, engagement, and retention

A/B testing is a powerful way to optimize metrics at every stage of the user funnel – from initial engagement to long-term retention. By testing different variants, teams can isolate and address weak points that are impacting key metrics.

For example, A/B tests can pinpoint where users are abandoning during the signup flow and test revised layouts, copy, or visuals to reduce dropout rates. 

Further down the funnel, tests might explore ways to increase conversion rates from free trials to paid plans. Messaging, pricing page design, and calls-to-action can be experimented with to determine what motivates users most.

Beyond conversions, A/B testing applies to engagement and retention as well. Tests might reveal which onboarding steps are causing early disengagement or how notification frequency impacts long-term retention. Iterating based on these learnings compounds over time.

Specific metrics like click-through rate on email campaigns, viral sharing of content, time spent in the app, and number of repeat purchases can all be improved through relentless A/B testing. 

When fueled by key metrics, A/B testing becomes an engine for perpetually optimizing the customer journey from first touch to loyal brand advocates.

3.  Iteratively improve through continuous small tests

A/B testing enables incremental optimization through many continuous small tests over time. Rather than periodic massive redesigns, products can evolve through minor refinements that gradually compound.

This starts with understanding the core customer journey and identifying areas of friction. These become hypotheses for iterative tests – e.g. will moving this button increase clicks? What if we change the copy here? Tests introduce small variations, measure impact, and keep or discard the changes based on data.

Over months and years, these minor tweaks add up to significant improvement. For example, an e-commerce site might first test the prominence of search bars, then checkout button colours, and then related product displays. Each test builds on the last to drive more revenue.

A travel app might iterate on its onboarding flow by testing tutorial placement, reducing steps, tweaking language clarity, and highlighting key features. Together these micro-changes create major refinement.

With continuous incremental testing, products evolve naturally over time through constant optimization rather than stagnating between infrequent redesigns. Users enjoy a smoother journey while product teams build knowledge.

4. Provide data to support decisions

A key value of A/B testing is generating definitive quantitative data to inform product decision making. Rather than opinions or hunches, test results offer concrete proof of which ideas users respond to best.

For example, a team might debate a homepage redesign but be unsure which layout performs strongest. A/B tests quickly provide data revealing which variant converts more users. This evidence quantitatively guides decision-making.

A/B testing data can powerfully shape product direction over time as well. For instance, a series of tests on signup flows could indicate two-step signups convert 30% better among key personas. This data signals product priorities to invest in streamlining signups.

Tests might also reveal that certain UI patterns or layouts resonate across different product surfaces and screens. These consistent data points can inform design system standards.

In essence, A/B testing helps teams align around “what works” based on user actions rather than assumptions. With concrete data, decisions are anchored in evidence that gives executives confidence. Data transforms discussions from opinions to informed tradeoffs backed by facts.

5.  Scale testing to large user bases

A major advantage of A/B testing is the ability to easily test changes with very large sample sizes by exposing variants to percentages of live user traffic. Whereas user research may provide qualitative insights from small groups of 5-10 users, A/B tests can quickly collect data from thousands or even millions of users.

For example, a test could reach 10,000 users in the first hour by showing a variant to 50% of traffic. In a week, variant performance across 500,000 visits provides statistically significant results. This scale and speed of sampling with real users is impossible in a lab.
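
As a rough illustration of how such results are judged for statistical significance, the sketch below runs a two-sided z-test on a pair of hypothetical conversion counts. The numbers are invented for the example; dedicated testing platforms perform this calculation automatically.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two normal-CDF tails
    return z, p_value

# Hypothetical week of traffic: 250,000 visits per variant.
z, p = two_proportion_z_test(conv_a=10_000, n_a=250_000,
                             conv_b=10_450, n_b=250_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p (e.g. < 0.05) suggests a real lift
```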

The large sample sizes of A/B testing make the data highly representative of the user base as a whole. Recruiting biases that skew small research samples vanish. Tests consistently reach diverse use cases and environments.

By combining user research’s qualitative understanding with A/B testing’s large-scale quantitative validation, teams get the best of both worlds – deep human insights and statistically robust performance data. Together these disciplines deliver complete perspective.

Limitations of A/B Testing

1. Only test what you specify

A key difference between A/B testing and user research is that A/B tests only measure what is explicitly specified upfront as variants. Teams define the changes being tested – like different headlines or button colors. But A/B testing won’t reveal insights beyond those predefined variants.

In contrast, user research is open-ended, allowing participants to share all their perceptions, issues, and ideas while interacting with a product. Research uncovers unspecified pain points and reveals unexpected human contexts.

For example, A/B testing button colors won’t reveal that users are confused by unclear navigation. But user research sessions and interviews might uncover that key flows are hard to find. These qualitative insights inform areas to quantitatively test and optimize further.

A/B testing also can’t capture why certain variants perform better. It indicates what worked but not why. Additional user research helps explain the human psychology behind the data.

Together, A/B testing and user research fill each other’s gaps. A/B tests drive iterative refinement of defined hypotheses while user research resets understanding of real user needs. Their interplay ensures optimization is grounded in human insight.

2. Don’t always explain why a change worked or didn’t

A limitation of A/B testing is that statistical significance does not equal causality. In other words, a test can reveal that Variant A outperformed Variant B but not explain exactly why it won.

For example, a test might show placing a call to action button above the fold increased conversions compared to below the fold. But without further investigation, we don’t know if it was button placement, visibility, colour, or some other factor that drove higher conversions.

Additional user research helps uncover the human motivations and psychology behind the data. Watching user recordings may reveal many didn’t even notice the below-the-fold button during sessions. This qualitative insight complements the quantitative A/B test result.

A/B testing optimizes by showing what performed best, while user research reveals why. Their combination identifies winning variants and explains the deeper human contexts behind them. 

Relying on A/B testing alone risks optimizing for the wrong, surface-level reasons without understanding root causes. But paired with user research, A/B testing becomes exponentially more powerful.

3. Not suited for major redesigns or new features

A/B testing excels at incrementally optimizing and refining existing experiences but has limitations around more substantial changes. Big redesigns or wholly new features are better informed by upfront user research than A/B tests.

A/B tests split traffic between a current and new variant. But with an entirely new experience, there may be no relevant current experience to test against. The concept needs open-ended research to understand if it solves user needs before quantitatively optimizing.

For example, adding a live chat customer support feature cannot easily be A/B tested without first building it. However, ethnographic research could uncover demand by observing how users request help. This guides whether chat is worth building at all.

With major redesigns, user research also provides deep human insights to inform the new direction before optimizing details. Big changes require stepping back to re-understand the user holistically.

A/B testing provides unmatched optimization power within defined product experiences. However, user research expands possibilities by mapping unmet needs for transformation beyond today’s product. Together, they fuel both incremental improvements and radical innovation.

4. Results can be misinterpreted without context

While A/B testing provides definitive data on what performed better, the results can be misinterpreted without contextual qualitative insights. User research provides interpretation and meaning to complement the raw test data.

For example, a test may show that a simplified homepage layout with fewer options converts better. Without customer context, teams may over-optimize and remove helpful functionality. But user research could reveal that the complexity was mainly confusing for new users, while power users relied on those advanced options. Observing these nuances prevents over-correction.

Tests can also reveal surprising or counterintuitive results. A button color that theoretically should perform worse may beat expectations. User research helps explain why by uncovering emotional responses and subconscious reactions that users can’t articulate in surveys.

Insights like device usage, accessibility needs, language and cultural patterns uncovered through interviews and observations ensure teams interpret A/B tests through the lens of real user needs. This protects against making changes that backfire despite positive test results.

A/B testing and user research must go hand-in-hand, with research providing the human grounding for statistical findings. Together they ensure changes resonate both empirically and emotionally.

The Role of User Research

Goals:

1.  Understand user behaviours, attitudes, motivations, pain points

User research uncovers deep qualitative insights that quantitative data cannot reveal. By interacting directly with real users, researchers gain empathy and an understanding of behaviours, attitudes, motivations, and pain points.

For example, ethnographic research techniques like ride-alongs and home visits capture behaviours like how families collaborate when planning vacations. Interviews and surveys shed light on attitudes towards concepts like eco-friendly packaging. Focus groups reveal motivations for purchasing decisions and emotional associations with brands.

User research also identifies pain points and friction not easily visible in data. Observing users struggling to locate the contact page or asking for help with account settings highlights issues for design focus. Sessions can even reveal subconscious micro-expressions signalling confusion before users consciously register problems.

These rich qualitative insights help teams deeply understand the “why” behind user behaviours. Research provides the human context and meaning necessary to create genuine connections through design. Rather than designing for data points, teams can craft experiences for real humans with feelings, values, and unmet needs.

The emotional and behavioural understanding unlocked by user research informs what to quantitatively test and optimize next with A/B testing. Together, they ensure solutions resonate at both the statistical and human levels.

2.  Gather insights into user needs and perspectives

User research unveils qualitative insights into user needs and perspectives that quantitative data cannot reveal. Through direct engagement with real people, research uncovers both conscious and unconscious user needs, desires, pain points, and mental models.

For example, interview techniques like jobs-to-be-done help uncover the underlying goals and outcomes users want to achieve, not just surface features. These could include needs like making home repairs less stressful or planning family activities more easily.

Ethnographic research like diary studies and shadowing capture perspectives on everyday routines, contexts, and emotions as users interact with products. This highlights unseen perspectives like the anxiety of health experiences or the joy of creating.

Open-ended interviews allow participants to share perceptions in their own words rather than forced responses. They highlight perspectives like perceived social status of brands or associations with sustainability.

In essence, user research provides the “why” behind behaviors by eliciting qualitative insights straight from the source – real humans. This informs solutions that resolve root causes rather than just optimizing metrics. When paired with A/B testing, research both reveals needs and quantifies solutions.

3.  Identify usability issues and opportunities

A core strength of user research is uncovering usability issues and pain points by observing real people interacting with products. Whereas metrics may show where users struggle, research reveals the qualitative reasons why.

For example, eye-tracking studies expose how users visually scan interfaces, highlighting areas receiving low attention. Observation can identify pain points like unfamiliar flows, confusing navigation, or unintuitive interactions. Tasks and scenarios reveal breakdowns in critical user goals.

Interviews add context to pinpoint root causes – are labels unclear? Are important elements buried? Is the information architecture inconsistent? Researchers can probe user reactions and emotions around usability breakdowns.

Beyond issues, research also reveals untapped opportunities. User requests, feedback, and insights highlight areas for innovation. Observing workarounds and improvised solutions inspires new features tailored to user needs.

The usability insights delivered by research inform what to refine, test, and optimize further using A/B testing. Research provides the human lens to understand usability barriers, while testing drives iterative improvement. Together, they compound usability gains over time.

User Research Methods

1. Interviews and focus groups

User research techniques like in-depth interviews and focus groups enable open-ended, discovery-driven conversations that reveal rich qualitative data. By directly engaging users in their own words, researchers go beyond surface opinions to uncover deeper emotions, motivations, and unmet needs.

For example, long-form interviews allow users to walk through detailed accounts of their experiences, in their own language and context. This highlights insights that surveys would miss or that users cannot easily articulate, like contradictions between stated and actual behavior.

Focus groups enable observing multiple perspectives at once. Watching users debate ideas and assimilate others’ input sparks new directions. The dynamics reveal social influences and areas of agreement vs contention around topics.

Both interviews and focus groups provide access to emotional feedback and stories that resonate more than stats. For example, participants may recount struggling with a workflow that reduced productivity or feeling pride in creating a product. This emotional context informs human-centred design.

The qualitative data gleaned from open discussion reveals the “why” behind behaviours that quantitative data alone cannot. Interviews and focus groups empower designers to connect with users’ inner thoughts, feelings and mental models.

2. Observation studies

Observation studies involve directly watching users interact with products in real contexts. This captures authentic user behaviours, attitudes, and emotions that users may not self-report in interviews.

For example, site visit observations can uncover how users navigate physical spaces, where they hesitate, and what attracts their attention. Moderators can notice body language, facial expressions, and environmental factors that surveys would miss. This reveals usability issues, points of confusion, and emotional responses.

Other observational methods like diary studies provide insight into real-world routines and environments. Users record videos, photos, or notes capturing moments of frustration or delight during daily activities. Immersing designers into real contexts inspires new solutions.

Observing users makes designers more empathetic. Watching real people struggle to complete tasks or express joy when succeeding highlights what truly matters. Tests can optimize metrics, but observation reveals the human meanings behind the numbers.

Unlike focus groups, observation studies capture authentic user behaviors rather than stated opinions. By combining observation with interview follow-ups, researchers connect insights to user psychology for a complete perspective. The human insights revealed inspire more meaningful designs.

3. Surveys and customer feedback

Surveys and customer feedback mechanisms like comment cards help gather qualitative insights from a broad swath of users. By querying large samples, researchers can identify themes and uncover common perspectives.

Well-designed survey questionnaires mix open-ended questions with rating scales to blend qualitative colour with quantitative prioritization. For example, an NPS survey could ask “What drove you to rate your likelihood to recommend as you did?” This captures the emotional context behind the number.

Broad surveys also help benchmark user perceptions over time through standardized questions. For example, an annual brand survey may track changing attitudes towards qualities like trust, innovation, and quality year over year.

Analysis of open survey responses and customer feedback like product reviews reveals common pain points, bright spots, and requests. Grouping open-ended insights into themes provides a qualitative window into broader user thinking.

The breadth of surveying many users reveals perspectives researchers might miss when speaking to small focus groups. Frequent surveys create ongoing conversations to guide decisions. Combined with direct user observation, surveys quantify the prevalence of attitudes while research explains the psychology behind them.

Importance for product development and improvement

User research is a fundamental, indispensable part of the product development process. Rather than designing for assumptions, research informs every stage of product creation and improvement with real user perspectives.

In the discovery phase, generative research like user interviews and observational studies reveal unmet needs and spark new product ideas. For example, ethnographic research could uncover challenges in the cooking journey, highlighting opportunities for innovative kitchen tools.

During product definition, concept testing helps teams refine ideas and features that resonate with user mental models. Research points product strategy toward ideas users understand and want.

In development, usability testing ensures products work as intended for real users. Studies identify pain points with prototypes long before launch. Research also iteratively improves products post-launch by revealing opportunities.

Without user research, products risk solving problems users don’t have, using language they don’t understand, and containing frictions that deter engagement. Research provides the human insights that distinguish great products from technology for technology’s sake.

In essence, user research transforms products from the team’s best assumptions into solutions directly informed by the living, breathing humans they aim to serve. It is an indispensable driver of innovation and customer delight.

Synergies Between A/B Testing and User Research

Complementary benefits

1.  User research provides context to interpret A/B test results

While A/B testing reveals which variant statistically outperforms, user research is often needed to provide qualitative context to interpret why a particular version won.

For example, a test may show a shorter homepage video has higher conversion rates. But without talking to users, we don’t know if it was the length, the content, or some other factor that drove the difference. User interviews could reveal that a complex storyline lost user attention despite excellent production quality.

Similarly, user research helps explain surprising or counterintuitive test results that go against assumptions. If a theoretically inferior design performs better, focus groups could uncover emotional responses and cognitive biases driving the effect.

In some cases, the test results may even point teams in the wrong direction without insights into user psychology. A “winning” variation may accidentally remove needed functionality or create new issues. Moderated user sessions help catch these pitfalls early before rolling out changes more broadly.

In essence, A/B testing provides the definitive answer for what to change, while user research offers the all-important context into why. Their interplay enables teams to make informed decisions rather than just chasing positive metrics.

2.  A/B testing gives quantitative data to supplement user research

While user research uncovers deep qualitative insights through techniques like interviews and observation, A/B testing validates and quantifies those learnings through live experimentation.

For example, user interviews may reveal that customers find a checkout flow confusing. Researchers can hypothesize ways to simplify the process. A/B tests then experiment with different variations to quantify which statistically reduces dropoff most.

Similarly, focus groups could point to unclear messaging around a product’s benefits. Researchers might craft new messaging concepts that resonate more with users. A/B testing compares versions to numerically measure which copy converts visitors most.

Without A/B testing, teams are left with assumptions and guesswork around what qualitative insights to act on. A/B testing provides the numbers to prioritize efforts and scientifically prove what works.

The qualitative insights from research inform what to test. A/B testing then tracks performance numerically in real user conditions. Together, the human perspective and the hard data drive better decision making than either alone.

In essence, A/B testing grounds qualitative insights in quantitative validation. Its experiments prove which discoveries translate into measurable gains at scale.

3.  User research insights inform hypotheses for A/B tests

Here are a few ways user research insights inform hypotheses for A/B testing:

  • User interviews may reveal confusion around a product’s messaging. Researchers can then craft alternative messaging concepts that are clearer based on user feedback. A/B tests compare the new versions against the original to see if comprehension improves.
  • Observation studies could uncover users struggling to complete key tasks like checkout. Researchers then hypothesize workflow modifications to simplify the process. A/B tests experiment with different flows to quantify which reduce dropoff most.
  • Focus groups may identify specific features users want, like stronger personalization. This sparks hypotheses around how to implement personalization to meet user needs. Testing compares personalized vs generic experiences.
  • Broad surveys could highlight areas of frustration for customers, like account login. Based on common write-in feedback, researchers can devise improved login flows to test.
  • Ethnographic research may reveal needs around easier price comparison while shopping. This leads to testing comparison features like side-by-side price grids vs listing each product separately.

4.  A/B testing validates findings from user research

A/B testing is a powerful method to validate and quantify insights uncovered through user research. While research provides qualitative learnings, A/B tests confirm findings translate to measurable gains with real customers. Some examples:

  • User interviews may reveal confusion around a product’s onboarding flow. Researchers can redesign the onboarding based on feedback to address pain points. A/B tests help validate if the redesigned flow statistically reduces dropoff.
  • Observation studies could show users struggling to locate important pages or features. This highlights potential site architecture issues. Researchers can hypothesize information architecture changes to improve findability. A/B testing different structures quantifies which help users complete key tasks better.
  • Focus groups may express desire for more personalized content. Researchers can develop personalized recommendation algorithms based on qualitative data. Testing personalization scientifically proves if it boosts engagement versus one-size-fits-all content.
  • Ethnographic research could identify user needs around streamlining a complex workflow. Researchers then devise a simplified workflow. A/B tests validate if the streamlined flow actually improves task completion rate.

In essence, A/B testing takes insights from small research samples and validates them at scale with real users. The combination of qualitative discovery and quantitative confirmation ensures changes resonate powerfully with customers.

Techniques to connect the approaches:

1. Interviews and surveys to shape ideas for A/B tests

Here are some ways that interviews, surveys, and other user research can shape ideas and hypotheses for A/B testing:

  • User interviews may reveal that customers find certain messages confusing or misleading. Researchers can then craft alternative messaging and language based on user feedback. A/B tests would then compare the new language against the old to see if it improves comprehension.
  • Broad surveys could highlight areas of the product experience that have low satisfaction scores from customers. Researchers can review open-ended feedback to understand pain points, and hypothesize changes to address them. A/B tests would experiment with different variants of those flows.
  • Concept and messaging testing interviews help researchers refine early-stage ideas based on user perspectives. This provides directional input to carry into structured A/B testing once concepts are more solidified.
  • Diary studies and other observational research may uncover points of friction in key user journeys. For example, users may frequently abandon an onboarding flow. This frames hypotheses around streamlining onboarding that can be tested.
  • Competitive analysis can reveal features lacking in a product compared to alternatives users mention. Brainstorming and user interviews help determine which new features to hypothetically add and test.

2. Observation and analytics to identify areas for testing

Observation and analytics work together to pinpoint where in the product experience testing will pay off.

Observational studies involve directly watching users interact with products and experiences. This captures authentic behaviours, pain points, emotions, and environments that users may not be able to articulate accurately in interviews. Diary studies and shadowing users are compelling examples. Watching where people hesitate, backtrack, or abandon a flow flags those areas as prime candidates for A/B tests.

Product analytics complement observation at scale. Funnel and behavioural data reveal which pages and flows underperform across the whole user base, helping prioritize which observed friction points to test first.

Surveys and customer feedback mechanisms like NPS add quantitative breadth to these signals. Well-designed questionnaires incorporate open-ended questions to gather qualitative colour in users’ own language. Themes point to common needs and bright spots worth turning into test hypotheses.

3. Interviews to explain surprising A/B test results

Interviews can provide crucial qualitative insights to explain surprising or counterintuitive A/B test results that go against assumptions:

  • If a theoretically inferior design variation performs better in an A/B test, user interviews could reveal emotional responses or cognitive biases driving this effect. For example, users may find a simpler layout more visually pleasing and trustworthy, even if it lacks details.
  • When a variation wins that removes seemingly helpful features or content, user testing may uncover those elements were actually confusing, unwanted, or distracting in practice from a usability standpoint.
  • Interviews help researchers deeply probe why users select particular options and not others during split testing. For example, a specific image or phrasing may trigger certain associations or meanings that strongly resonate.
  • Test analytics may show a variation performs differently for certain user segments. Follow-up qualitative research can uncover differing needs, perspectives, or contextual factors between groups that explain the variance.
  • If the “winning” variation degrades other metrics like satisfaction or retention later on, interviews may reveal it created new user issues or undermined long-term engagement.

In essence, interviews provide indispensable “colour behind the numbers” to explain what is really driving key user behaviours and metrics, especially when the data contrasts with conventional wisdom. The human insights inform sound interpretations.

Frequently Asked Questions

Q: How do I get started with A/B testing?

Answer: Use a dedicated A/B testing platform to handle the technical setup and run experiments. Start with high-traffic pages and key conversions. Test simple changes like text and visuals first.

Q: What kinds of things can I test?

Answer: Test any part of the user experience – copy, headlines, layouts, images, colours, workflows, features, etc. Let user research uncover problems to focus efforts.

Q: How many variants should I test?

Answer: Limit to 2-3 variants focused on a single change. Too many variants dilute statistical power across versions.

Q: What sample size do I need?

Answer: For conclusive results, aim for at least hundreds or thousands of conversions per variant depending on traffic. Use power calculators.
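
For a sense of what such a power calculation involves, the snippet below implements the standard sample-size formula for comparing two conversion rates. The baseline rate, target lift, and the conventional 95% confidence / 80% power defaults are illustrative assumptions.

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, rel_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect a relative lift in conversion."""
    p_var = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return int((z_alpha + z_power) ** 2 * variance / (p_var - p_base) ** 2) + 1

# Detecting a 10% relative lift on a 4% baseline conversion rate:
print(sample_size_per_variant(p_base=0.04, rel_lift=0.10))  # ~39,500 per variant
```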

Q: How long should tests run?

Answer: Let tests run 1-2 weeks minimum unless there is an unambiguous winner. Ensure statistical significance.

Q: What are some best practices?

Answer: Target single changes per test, don’t overlap tests, ensure thorough QA, and limit customer impact.

Q: What user research methods work well?

Answer: Interviews, observation studies, surveys, analytics, and usability testing surface problems and guide hypotheses.

Is your CRO programme delivering the impact you hoped for?

Benchmark your CRO now for an immediate, free report packed with actionable insights you and your team can implement today to increase conversion.

Takes only two minutes

If your CRO programme is not delivering the highest ROI of all of your marketing spend, then we should talk.