The Gazelle Blog

How to Handle Scope Changes Without Burning Bridges

Written by Gazelle Global | April 1, 2026

In market research, sample size is often treated as a proxy for confidence. The bigger the sample, the more defensible the data, or so the conventional wisdom goes. But what happens when the sample is inherently small, not because of a design flaw or a budget shortcut, but because the research situation simply doesn't allow for anything else? This is not an edge case. It is a challenge that shows up across industries, therapeutic areas, and research budgets, and it demands a methodology that most researchers have never been formally introduced to: Multi-Criteria Decision Analysis, or MCDA.

Three Different Problems, One Common Constraint

Small samples don't all come from the same place, and understanding why yours is small matters for how you design around it.

Sometimes the population itself is the constraint. In rare disease research, for example, the number of healthcare professionals actively treating a given condition may number in the dozens globally. No amount of recruitment effort or panel access changes that ceiling. You work with those who exist.

Sometimes the budget is the constraint. Highly specialized professionals (C-suite executives, subspecialty physicians, technical experts in niche industries) can command $500 or more per completed interview. At $500 apiece, even 50 respondents is a $25,000 line item, enough to blow through a project budget before you've generated a single insight. The population may be large enough in theory, but the economics cap your sample in practice.

And sometimes it's a findability problem. Niche B2B professionals (think plant managers for specialized manufacturing processes, IT architects in specific enterprise environments, or procurement leads for highly regulated categories) exist in sufficient numbers but are extraordinarily difficult to locate, screen, and recruit. Gazelle's clients know this situation well; it's often the reason they come to us in the first place, after other panel partners have already tried and failed to find the right respondents.

Three different root causes, but the same outcome. You are making a significant research-backed decision with a small sample, and your methodology needs to be chosen accordingly.

Why the Standard Playbook Falls Short

When researchers face complex, multi-attribute decisions, the instinct is to reach for conjoint analysis or MaxDiff. Both are well-established, rigorous tools. Conjoint has been the dominant methodology for understanding how decision-makers evaluate competing options since the early 1970s: it presents respondents with product profiles and uses statistical modeling to derive how much each attribute actually drives choice. MaxDiff, a related trade-off technique, is useful for prioritizing a list of attributes or features. Both produce the kind of quantified, defensible, and granular derived importance scores that clients and stakeholders find compelling.

The problem is that both require substantial sample sizes to function as intended. Conjoint typically requires at least 30 respondents per target group to generate statistically reliable utility scores. MaxDiff has similar constraints, and while it handles attribute prioritization well, it evaluates only one dimension at a time, meaning it can't capture the multi-attribute trade-offs that reflect how decisions are actually made. When you're working with 8, 12, or 15 respondents, these tools either break down statistically or simply can't be deployed. Researchers in this position often default to pure qualitative discussion, which is valuable but can lack the structure needed to produce data that travels well inside a client organization.

What MCDA Does Differently

MCDA is a self-explicated methodology that asks respondents to do two things: first, allocate importance weights across a defined set of key attributes so that the weights sum to 100, and then rate each product, company, or option on each attribute individually. The weighted ratings are combined to produce an overall performance score for each option being evaluated. The result is a structured, quantitative output produced by a process with no minimum sample size requirement.
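
To make the mechanics concrete, here is a minimal sketch of the scoring arithmetic in Python. Everything in it is hypothetical: the attribute names, weights, ratings, and the 1-10 scale are invented for illustration, and a real study would tailor all of them. Only the structure, weights summing to 100 multiplied against per-attribute ratings, mirrors the method.

# Minimal MCDA scoring sketch. All data is hypothetical and
# invented for illustration; only the structure mirrors the method.

def mcda_score(weights, ratings):
    """Combine importance weights (summing to 100) with per-attribute ratings."""
    assert abs(sum(weights.values()) - 100) < 1e-9, "weights must sum to 100"
    return sum(weights[attr] * ratings[attr] for attr in weights) / 100

# One respondent allocates 100 importance points across attributes...
weights = {"efficacy": 40, "safety": 30, "convenience": 20, "cost": 10}

# ...then rates each option on every attribute (here on a 1-10 scale).
option_a = {"efficacy": 8, "safety": 6, "convenience": 9, "cost": 4}
option_b = {"efficacy": 6, "safety": 9, "convenience": 5, "cost": 8}

print(mcda_score(weights, option_a))  # 7.2
print(mcda_score(weights, option_b))  # 6.9

Across a small group, the per-respondent scores can simply be averaged for each option, and the individual weight allocations become natural probes for the moderated discussion that follows ("you put 40 of your 100 points on efficacy; walk us through that").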

It's worth noting that qualitative researchers often apply versions of this logic intuitively, asking respondents to rank priorities or assign relative value to competing factors. What MCDA does is formalize that instinct into a replicable, scoreable structure, turning a conversational exercise into something that produces data a client can act on. For researchers already comfortable in qualitative settings, the methodology is less an import than a disciplined version of what good moderators already do.

MCDA has been used for years in healthcare by regulators and health technology assessment bodies to support benefit-risk decisions and portfolio analysis. Its application in commercial market research, particularly for informing go/no-go decisions and understanding prescribing or purchasing behavior, is less widespread, which means researchers who know how to deploy it well have a genuine competitive advantage.

The Real Strength: MCDA Inside a Qualitative Discussion

Where MCDA truly earns its place is not as a standalone exercise but as a structured component embedded within a qualitative discussion. This combination does something neither approach can achieve on its own. Pure qualitative discussion is rich and exploratory, but physicians, executives, and specialists naturally gravitate toward the attributes that are most top-of-mind for them, which can skew the overall picture and make it difficult to get a consistent, comparable read across respondents. MCDA, completed individually before the facilitated group discussion begins, acts as an anchor. Every attribute receives equal and deliberate attention from every participant before anyone has been influenced by what others in the room have said.

From there, the qualitative discussion does what it does best. Moderators can probe the reasoning behind the scores, explore how respondents interpret different attribute levels, surface the real-world trade-offs they would actually make, and understand what competitive dynamics or practical constraints are shaping their thinking. When the scores and the narrative are analyzed together, decision-makers get both the "what" and the "why," and the output carries the credibility that neither the numbers nor the conversation could establish on their own.

Where Expertise Makes the Difference

None of this happens automatically. The quality of MCDA output depends entirely on how well the study is designed and executed, and this is precisely where experienced research partners earn their value.

Attribute selection is foundational. Include too many criteria, and respondents struggle to discriminate meaningfully between them. Include too few, and you miss the nuances that actually drive decision-making. The attributes need to reflect real-world priorities, not simply the features a client most wants to validate. Sequencing matters too: in studies where MCDA has been used effectively, respondents complete the exercise before any in-depth product or topic discussion begins, specifically to prevent group conversation from biasing individual scoring. That single design decision, straightforward in principle but easy to overlook under the pressure of a tight discussion guide, is the kind of judgment that separates experienced qualitative researchers from less seasoned teams.

Moderator skill is equally critical. The MCDA framework only illuminates how respondents are thinking if the moderator knows how to translate the scores into the conversation, recognizes which ratings warrant deeper probing, and manages group dynamics so that every voice contributes. And the analysis itself requires genuine interpretive experience: integrating quantitative scores with conversation data, making sense of apparent contradictions between what respondents say and how they score, and translating the combined findings into a recommendation that a leadership team can actually act on.

When the Stakes Are High, Design Is Everything

Whether you are trying to reach a handful of subspecialty physicians treating a rare condition, a small cohort of high-cost executive respondents, or an elusive category of niche B2B professionals, the constraint is the same, and so is the solution. MCDA, embedded in a well-structured qualitative program, can deliver the directional intelligence needed to move forward with confidence, even when your total respondent count fits in a single focus group.

At Gazelle Global, we have spent more than 30 years helping the research community find and engage exactly the kinds of respondents who are hardest to reach. Our clients often come to us because the population is small, the respondents are expensive, or no one else has been able to find them. We understand that in those situations, every conversation has to count, and that means the study design needs to be right before a single interview begins. Our consultative approach means we don't simply execute the plan handed to us. We help build the right plan from the start, drawing on deep methodological expertise, a global network of specialist moderators and local experts, and rigorous project management that ensures good design translates into high-quality data in the field.

If you are facing a complex research decision with a small or hard-to-reach sample, don't just come to us for a bid. Come to us for a consultation. Let's think through what will actually set your study up for success.

Ready to talk through your research design challenges?

Contact Us