
Research Process Overview

This page walks through my standard research process. Click any of the steps below to jump to the details for that step:

  1. Discovery

  2. Defining The Goals

  3. Choosing The Methodology

  4. Recruiting Participants

  5. Writing The Test Plan

  6. Running The Study

  7. Synthesis

  8. Sharing The Story

Discovery

Before I even start thinking about methods or recruiting, I like to get a full picture of what’s going on. This means getting a solid understanding of both the product or experience and the business side of things. I’ll usually start by asking stakeholders a lot of questions about the product, flow, or feature, such as:


  • What does the product, flow, or feature do? Who is it for? What is its goal?

  • How does it fit into the bigger product or digital ecosystem?

  • What are the current goals from the business side (growth, retention, support deflection, etc.)?

  • How is it performing right now? What’s working, and what’s not?

  • Are there existing data points we can look at—support tickets, analytics, survey data, etc.?

  • What do we already know about the user experience—and what do we think we know?


I also like to experience the product or flow myself to get a feel for what users are dealing with and to build a clear understanding of how it works.

Once I have a clear picture of what I’ll be working on, I work with stakeholders to align on what they’re hoping to learn, what decisions are coming up, and how research can actually help. This early context-building helps make sure the research is grounded, relevant, and set up to drive clear decisions.

Defining The Goals

Once I’ve got the lay of the land, it’s time to define what we’re actually trying to learn. This part is all about focus. I work closely with stakeholders to turn broad questions into clear, actionable research objectives.


We’ll dig into:

  • What decisions are coming up that this research will support?

  • What do we absolutely need to know—and what would just be nice to know?

  • Are we trying to measure behavior, understand opinions, or both?

  • Are we leaning more qualitative (in-depth insights) or quantitative (measurable patterns)?

  • What assumptions or hypotheses are we working with?


At this stage, I’ll help the team draft a hypothesis if we’re testing a specific experience or concept—something like, “We believe users will prefer Flow B because it has fewer steps than Flow A, which will save users time.”


I also make sure we’re aligned on the scope:

  • What’s realistic for the time and resources we have?

  • Do we need to break the work into phases?


This part of the process sets the tone for everything that follows. By getting really crisp on our goals, we can choose the right methods, stay focused, and make sure the insights we collect are actually useful for the people who need them.

Choosing The Methodology

Now that we know what we’re trying to learn, it’s time to choose the best method (or mix of methods) to get us there. This part is like building a research toolbox—no one tool works for everything, so I tailor the approach to fit the problem, timeline, and goals.


Here’s how I think through it:

  • If we are asking why or how to fix something → Qualitative methods like user interviews, usability testing, or diary studies.

  • If we want to know how many or how much → Quantitative methods like surveys, A/B testing, or analytics reviews.

  • If we’re focused on what people do → Behavioral methods like usability tests, contextual inquiries, or clickstream analysis.

  • If we’re focused on what people say or think → Attitudinal methods like interviews, concept testing, or surveys.


Sometimes it’s a combo—like doing a survey first, then interviews to go deeper. I always aim for methods that’ll get clear, useful insights without overcomplicating things.


One of my favorite resources to refer back to when deciding which methodology to use is this NN/g article: When to Use Which User-Experience Research Methods
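When the quantitative route wins out, I also like having a quick, back-of-the-envelope way to check whether a difference we see (say, between the Flow A and Flow B from the earlier hypothesis) is more than noise. Here is a minimal sketch in plain Python of one such check, a two-proportion z-test on task completion rates; the counts are purely hypothetical.

from math import sqrt, erfc

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    # Compare two completion rates (e.g., Flow A vs. Flow B) with a two-sided z-test.
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)        # pooled rate under "no difference"
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))    # standard error of the difference
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                          # two-sided p-value from the normal curve
    return p_a, p_b, z, p_value

# Hypothetical numbers: 62 of 100 participants completed the task in Flow A, 78 of 100 in Flow B.
p_a, p_b, z, p_value = two_proportion_ztest(62, 100, 78, 100)
print(f"Flow A: {p_a:.0%}, Flow B: {p_b:.0%}, z = {z:.2f}, p = {p_value:.3f}")

If the p-value is small (commonly under 0.05), the difference is unlikely to be chance alone; either way, I treat numbers like this as one input alongside what we observed qualitatively.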

Recruiting Participants

The best research in the world doesn’t mean much if we’re not talking to the right people. At this stage, I focus on identifying and recruiting participants who actually reflect the users we care about for the study—whether that’s current customers, potential users, or even folks who’ve had a poor experience and stopped using the product.


Here’s what I think about:

  • Who are we designing for? (new users, power users, people with specific needs, etc.)

  • What context matters? (device used, location, experience level, accessibility needs)

  • Are we trying to represent different perspectives? (diverse age groups, languages, regions, etc.)


Once we’ve defined the target audience, I either partner with internal teams (like research ops) or use recruitment tools to find people who match our criteria. If we’re doing accessibility-focused research, I work extra hard to make sure we’re including folks with disabilities, who are often overlooked.

The more intentional we are about who we talk to, the better the insights we’ll get.

Writing The Test Plan

Before any research kicks off, I always take time to write a clear and structured test plan. This is my blueprint for the study—it outlines what we’re trying to learn, how we’ll go about it, and ensures everyone’s on the same page.


The most important part? Crafting the right questions—and how I write those depends entirely on the methodology we’re using.

Some examples of how I might formulate questions:

  • Interviews: I write open-ended, neutral questions that avoid leading language, and I build in space for follow-ups, because the best insights often come from unexpected tangents.

  • Usability testing: I focus on realistic tasks and scenarios rather than instructions. I want to observe natural behaviors, so I frame things like: “Show me how you would…” instead of “Click here, then here.”

  • Surveys: I make sure every question is clear, direct, unbiased, and targeted to what we need to learn. I also think carefully about answer types—multiple choice, open text, Likert scales—depending on how we’ll analyze the data (there’s a small example of this at the end of this section).

  • Tree tests or card sorts: I keep wording consistent and avoid technical jargon so we’re really testing users’ understanding—not tripping them up with language.


The test plan also includes logistical details: who we’re testing with, how we’re collecting data, how we’ll measure success, and how findings will be shared.
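As a concrete example of the answer-type point in the Surveys bullet above, here is a minimal Python sketch (with made-up ratings) of how I might summarize a 1-to-5 Likert question once responses come in. The question wording and the numbers are hypothetical.

from collections import Counter

# Hypothetical ratings for "How easy was it to complete your order?" (1 = very difficult, 5 = very easy)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 4, 3]

counts = Counter(responses)
mean = sum(responses) / len(responses)
top_two_box = sum(1 for r in responses if r >= 4) / len(responses)  # share answering 4 or 5

for rating in range(1, 6):
    print(f"{rating}: {'#' * counts[rating]} ({counts[rating]})")   # simple text histogram
print(f"mean = {mean:.1f}, top-2-box = {top_two_box:.0%}")

Planning this ahead of time keeps me honest: if a question can’t be summarized cleanly, it usually means the question itself needs rewording.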

Running The Study

Once the plan and participants are locked in, it’s go time. 


When moderating, I always aim to create a space where participants feel comfortable being honest (even if they think they’re “doing it wrong”—that’s usually where the best insights come from). If it’s an unmoderated or quant study, I make sure everything is tested beforehand so we’re not chasing bad data later.


I also take a lot of notes during sessions—real-time thoughts, gut reactions, and moments that stand out. These notes help me quickly identify patterns and spark ideas as the study unfolds, and they keep the momentum going so I can start synthesizing right away.


During this phase, I’m observing behaviors, listening carefully to word choices, and watching for patterns, contradictions, or surprises. I also stay in close touch with the team—sharing early impressions or interesting quotes in real time so everyone stays engaged and excited.

Synthesis

After the sessions wrap, it’s time to dig into the data. This is where the magic happens—turning raw observations into clear, meaningful insights.


I review everything: notes, recordings, patterns in behavior, quotes that stand out, and anything unexpected. I look for connections across participants, but also what didn’t happen or what wasn’t said (those silences are telling, too).


I organize findings into themes and tie each insight back to our original research goals. This part also includes identifying pain points, mental models, unmet needs, and opportunities for improvement. If we tested multiple versions of a design, I’ll map out what worked, what didn’t, and why.


Whether it’s five interviews or 500 survey responses, my goal is to tell a focused story about what we learned—and what to do about it.
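When the volume of notes gets large, a small script can help with that first pass of pattern-finding, for example tallying how many different participants hit each coded theme. A minimal sketch, assuming observations have already been tagged with themes (the tags and participants below are hypothetical):

from collections import defaultdict

# Hypothetical coded observations: (participant, theme)
observations = [
    ("P1", "unclear pricing"), ("P1", "too many steps"),
    ("P2", "too many steps"), ("P3", "unclear pricing"),
    ("P4", "too many steps"), ("P5", "unclear pricing"), ("P5", "search works well"),
]

participants_by_theme = defaultdict(set)
for participant, theme in observations:
    participants_by_theme[theme].add(participant)

# List themes by how many different participants mentioned them
for theme, people in sorted(participants_by_theme.items(), key=lambda item: -len(item[1])):
    print(f"{theme}: {len(people)} of 5 participants ({', '.join(sorted(people))})")

The counts never replace the judgment calls in synthesis; they just point me toward which themes deserve the deepest look.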

Sharing The Story

Insights are only useful if people hear—and remember—them. So I don’t just drop a 30-slide deck and walk away. I tailor my findings to the audience, whether that’s designers, product managers, engineers, or execs.


Sometimes it’s a detailed report, sometimes it’s a one-pager or a workshop-style readout with key quotes, themes, and recommendations. I love using visuals—journey maps, flow breakdowns, annotated wireframes—whatever helps bring the user’s experience to life.


Most importantly, I connect insights to action: What should we do next? What’s urgent? What’s an easy win? I also leave space for discussion—so the team can ask questions, challenge assumptions, and start planning next steps.


In short: I don’t just hand off findings—I spark conversations that move things forward.
