
Scaling a UXR program isn't for the faint of heart.

But there are some key guidelines and practices I've used to stand up and roll out effective, trustworthy quantitative and mixed-methods UX Research programs across organizations. Take a peek inside my thought processes and some of the challenges I faced and overcame in the areas of:

Mixing Methods

My go-to summary of mixed methods research: quantitative tells us what users are doing; qualitative tells us why they do it. Focusing on only one leaves a gap in knowledge and breeds assumptions that often lead to the wrong decision, without a clear path to the right one.


This visual model is my clearest way of illustrating the product and mixed-methods research lifecycle. For an early stage team with an existing product, the best place to start is with evaluative user research insights and to form hypotheses that translate into testing ideas or confident changes. For later stage teams with multiple products and existing test and research results, starting at any point in the wheel works. 

[Image: product and mixed-methods research lifecycle model]

For example, you can start with a trend you have seen in site or product usage data. Take the hypothetical case of a haircare site: users are exiting the site after engaging with a quiz that helps them decide between multiple hair products. From the usage analytics (exit rate and quiz engagement), we know what users are doing. That raises a question about why users are leaving, so over to user research it goes. To keep it lightweight, an unmoderated usability test with questions about user needs and expectations is a great way to check for technical issues and unexpected reasons the quiz is not giving users what they need.

In a well-designed usability test, multiple themes tend to surface among respondents, and from those themes we can form a few assumptions and corresponding recommendations for change. Since there are still open questions about what exactly is causing users to leave, and the recommended change could threaten conversions, it's not a confident change to make outright, so it goes into A/B testing. The results of that test tell us whether the change produces a significant improvement in our main KPI, exit rate, and if so, the change ships.
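The "is the lift significant" question at the A/B testing step is commonly answered with a two-proportion z-test on the conversion-style metric (here, exit rate). Below is a minimal sketch using only the Python standard library; the traffic and exit numbers are invented for illustration, not real data.

```python
# Hypothetical two-proportion z-test for the haircare quiz A/B test:
# did the variant significantly change exit rate vs. control?
from statistics import NormalDist

def two_proportion_z_test(exits_a, n_a, exits_b, n_b):
    """Return (z, two-sided p-value) comparing two exit rates."""
    p_a, p_b = exits_a / n_a, exits_b / n_b
    p_pool = (exits_a + exits_b) / (n_a + n_b)          # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided
    return z, p_value

# Illustrative numbers: control exits 420/1000 visits, variant 360/1000
z, p = two_proportion_z_test(420, 1000, 360, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the exit rate drops from 42% to 36% and the test comes back significant at the usual 0.05 threshold, which is the signal to ship the change.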

Roadmapping and Strategic Prioritization

The best problem to have as a UX and online experimentation team is too many test ideas. Sifting through a backlog can be a pain, but there are strategic ways to rank and prioritize what comes next and how to talk through them with a team.


The most straightforward way I have found to roadmap testing ideas and UX studies includes these metrics:

  • Level of Effort

  • Potential Impact

  • Je Ne Sais Quoi


It's rarely as simple as these three factors, so when it is, we embrace it and enjoy the convenience. I have found these secondary factors to be good tiebreakers when needed:

  • SEO Value

  • Interdependent products

  • Globally applicable changes (e.g., suite of websites with similar content and/or assets)
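One way to make this two-stage ranking concrete is a sort key: a primary score from effort, impact, and the intangible bonus, with the secondary factors used only to break ties. The sketch below is an assumption about how such scoring could be coded up; the field names, scales, and backlog items are all hypothetical.

```python
# Minimal sketch of the prioritization framework described above.
# Scales and example items are illustrative assumptions, not real data.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    effort: int            # level of effort, 1 (low) to 5 (high)
    impact: int            # potential impact, 1 (low) to 5 (high)
    je_ne_sais_quoi: int   # gut-feel bonus, 0 to 2
    seo_value: int = 0     # secondary tiebreaker
    global_reach: int = 0  # secondary tiebreaker

def priority_key(idea: TestIdea):
    # Primary score: impact per unit of effort, plus the intangible bonus.
    primary = idea.impact / idea.effort + idea.je_ne_sais_quoi
    # Secondary factors only matter when primary scores tie.
    return (primary, idea.seo_value + idea.global_reach)

backlog = [
    TestIdea("Quiz copy change", effort=1, impact=3, je_ne_sais_quoi=1),
    TestIdea("New page template", effort=4, impact=5, je_ne_sais_quoi=0),
    TestIdea("Filter redesign", effort=2, impact=4, je_ne_sais_quoi=0, seo_value=1),
]
for idea in sorted(backlog, key=priority_key, reverse=True):
    print(idea.name)
```

Because the key is a tuple, Python's sort compares the primary score first and only consults the tiebreaker sum when two ideas score identically, which mirrors the "secondary factors as tiebreakers" rule above.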

Strategic Alignment with Tactical Changes

Strategies keep work focused, intentional, and impactful. They are also excellent guiding lights for teams of any size: with a strategy to reference, people can think critically and work more independently rather than needing to consult leadership as frequently.


The best way I have found to develop strategies for products is to use a combination of internal stakeholder perspectives and themes that come up in user research: what are the consistent requests and complaints users voice? Cross-functional design workshops are an excellent way to uncover what folks across teams believe about products, and they double as a way to educate those teams on user insights and engagement with the product.


Much of my experience in strategically building products is in the e-commerce world, focusing on interactive customer tools (quizzes, sort/filter tools), product information content, and page templates. Here's an example of how the strategies identified for one of the product page templates (left column) align with the tactical products and content being worked on (top row):
[Image: strategy-tactic alignment matrix for a product page template]

The strategies here were developed from a design workshop (cross-functional internal stakeholders) and user studies (external users). Ideally a given tactic will check off multiple strategic goals, and the more that one checks off, the higher impact score it can be assigned (back to prioritization). If an idea does not align with one of the strategic goals, it either does not belong in the roadmap or needs to have glaringly obvious justification (e.g., major technical error or usage roadblock). 
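The alignment check described above (count how many strategic goals a tactic satisfies, flag anything that satisfies none) is simple enough to express as a few lines of code. This sketch uses made-up strategy and tactic names; the real ones would come from the design workshop and user studies.

```python
# Sketch of the strategy-tactic alignment matrix described above.
# Strategy and tactic names are hypothetical placeholders.
strategies = {"reduce friction", "build trust", "personalize"}

tactic_alignment = {
    "product quiz":     {"reduce friction", "personalize"},
    "reviews module":   {"build trust"},
    "unrelated banner": set(),
}

# Impact score = number of strategic goals a tactic checks off;
# this score feeds back into the prioritization framework.
scores = {tactic: len(aligned & strategies)
          for tactic, aligned in tactic_alignment.items()}

for tactic, score in scores.items():
    if score == 0:
        print(f"{tactic}: no strategic alignment -> needs strong justification or drop")
    else:
        print(f"{tactic}: impact score {score}")
```

A tactic with a score of zero is exactly the "does not belong in the roadmap without glaringly obvious justification" case from the paragraph above.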

Case Studies 

CHALLENGES

In the early days of establishing testing and research programs, I have encountered these interrelated challenges:


  • Lack of cross-functional buy-in & engagement

  • No system to document findings and next steps

  • No prioritization system for tests and product changes

  • Inconsistent, incomplete KPIs and reporting

  • Competing changes/work with other teams

SOLUTIONS

Lack of cross-functional buy-in is often a symptom of a program that lacks systematic and strategic decision making. This is often compounded when testing and research are seen as "competition" by other teams.


To solve these challenges, I looked at the root issue: incomplete documentation and no strategic planning. I first led strategic planning sessions with the team and then met with other team leads to uncover their perspectives and gaps. While training my team on testing and UX best practices, I developed systems for tracking results and next steps, which were accessible by other teams. We also developed a prioritization framework to guide our testing, research and product roadmaps.


Importantly, we spent time educating other teams on our work and results, and we encouraged questions and doubts so we could inform and debunk.


These changes created a strategic, critically thinking team that fostered collaboration and trust.


© 2023 by Olivia Miller. Proudly created with Wix.com
