Can We Stop Trying to Validate Patient Share Against Market Share?

Understanding What Survey-Based Demand Measures (and What It Doesn’t)

  • “How do you go from preference share to market share?”

  • “How do you adjust for overstatement?”

  • “Do you calibrate your survey estimates to secondary data?”

  • “Can you validate your demand estimates?”

These questions come up constantly in pharma demand discussions—and they reveal a deeper confusion about what survey-based demand estimates actually represent.

The first principle is simple: survey-based estimates are not forecasts. They are not volumetric predictions and cannot be plugged directly into financial models. The unit of analysis in an HCP survey is patient share, and converting that into volumetric share requires a transfer function that varies by category—dosing, duration, compliance, persistence, and other real‑world factors.
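As a minimal sketch of such a transfer function (every factor name and value below is a hypothetical illustration, not a standard formula; real conversions are category-specific):

```python
# Hypothetical sketch: converting a survey-based patient-share estimate
# into treatment units. All factors and values are illustrative assumptions.

def volumetric_units(patient_share, eligible_patients,
                     units_per_day, days_of_therapy,
                     compliance, persistence):
    """Translate a patient-share estimate into treatment units.

    patient_share     fraction of eligible patients allocated to the product
    eligible_patients size of the treated population
    units_per_day     dosing (e.g., tablets per day)
    days_of_therapy   intended duration of treatment
    compliance        fraction of prescribed doses actually taken
    persistence       fraction of intended duration patients stay on therapy
    """
    patients = patient_share * eligible_patients
    return patients * units_per_day * days_of_therapy * compliance * persistence

# Example: 25% patient share of 100,000 eligible patients, 2 units/day
# for 90 days, 80% compliance, 70% persistence -> ~2.52 million units
units = volumetric_units(0.25, 100_000, 2, 90, 0.80, 0.70)
```

The point of the sketch is that identical patient shares can imply very different volumes once dosing, duration, compliance, and persistence differ across categories.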

Why “Preference Share” Doesn’t Belong in Pharma

The term “preference share” is a relic from early conjoint work, where respondents rated and sorted profiles and analysts inferred choice from the highest preference score. That logic works—barely—in consumer goods, where the respondent is the decision-maker and the unit of analysis is the individual consumer.

In pharma, the link between what a physician reports and what the market ultimately does is indirect. Patient share is not brand share, and treating them as interchangeable is the root of many misinterpretations.

Can We Validate Survey-Based Demand with Secondary Data?

In consumer goods, choice models produce brand share estimates that can be calibrated to actual market share. That calibration becomes the baseline for scenario modeling.

In HCP allocation models, the output is patient share, while secondary data provides brand share. These are fundamentally different constructs. Attempting to calibrate one to the other is not just difficult—it is conceptually invalid.

Separating Survey Error from Forecasting Error

Pharma demand assessments contain two broad classes of error:

  • Survey-based error: overstatement, optimism, hypothetical bias

  • Forecasting error: assumptions around awareness, access, supply, adherence, and persistence

There are also structural assumptions that sit between these two categories. For example, many models implicitly assume 100% awareness or unconstrained supply—issues that are typically handled during the volumetric conversion stage rather than in the survey itself. And depending on the design, there may be attribute‑level measurement bias driven by non‑compensatory preferences or overly complex conjoint tasks. These complications matter, but they sit outside the core question of how we interpret survey‑based demand, so they’re beyond the scope of this brief.

The two sources of error must be addressed separately. Overstatement is a survey issue; volumetric conversion is a forecasting issue.

Practical Ways to Address Survey-Based Overstatement

The simplest and most widely used correction is the Juster scale, which captures the likelihood of usage on an 11‑point scale. Each point corresponds to an empirically derived probability of actual prescribing. A respondent who allocates 20% of patients to Product X but reports a 50% likelihood of prescribing yields an adjusted share of 10%.
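The arithmetic is simply a down-weighting of the stated allocation (the probability mapping itself is empirical; the numbers here just reproduce the example above):

```python
# Sketch of a Juster-style overstatement correction. In practice each
# point on the 11-point scale carries an empirically derived probability
# of actual prescribing; the values below are just the worked example.

def adjusted_share(stated_share, likelihood_probability):
    """Down-weight a stated patient allocation by prescribing likelihood."""
    return stated_share * likelihood_probability

# 20% stated allocation x 50% likelihood -> adjusted 10%
share = adjusted_share(0.20, 0.50)
```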

A more sophisticated approach is to model prescribing likelihood statistically across multiple “gates,” such as:

  • Assessment of the new product profile

  • Satisfaction with current treatments

  • Adoption propensity

These constructs, measured through multiple items, can be modeled jointly to estimate a respondent’s true trial likelihood.
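One way to sketch the gated approach is a logistic combination of the gate scores. All weights, scores, and the sign conventions below are hypothetical; a real model would estimate them jointly from the multi-item constructs rather than assume them:

```python
import math

# Hypothetical multi-gate adjustment: each gate score (scaled 0-1) stands
# in for a construct measured through multiple survey items. Weights and
# intercept are illustrative assumptions, not estimated values.

def trial_likelihood(profile_assessment, current_satisfaction,
                     adoption_propensity,
                     weights=(2.0, -1.5, 1.5), intercept=-1.0):
    """Combine gate scores into an estimated trial likelihood via a
    logistic link. Higher satisfaction with current treatments lowers
    trial likelihood, hence the negative weight."""
    w1, w2, w3 = weights
    z = (intercept + w1 * profile_assessment
         + w2 * current_satisfaction + w3 * adoption_propensity)
    return 1.0 / (1.0 + math.exp(-z))

# Example: favorable profile (0.8), moderate satisfaction (0.5),
# high adoption propensity (0.9)
p = trial_likelihood(0.8, 0.5, 0.9)
adjusted = 0.20 * p  # stated 20% allocation, down-weighted by trial likelihood
```

The design choice worth noting is that the gates interact through a single latent propensity rather than being multiplied independently, which avoids compounding three separate overstatement corrections.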

The goal is not perfection. It is triangulation—reducing bias by integrating multiple signals.

What “Validation” Really Means for Survey-Based Demand

A question that comes up often is whether a survey-based estimate of patient share is “validated” by secondary data. Conceptually, this is no more meaningful than asking whether a single response on a 5‑point Likert scale is “validated” by real‑world data.

In both cases, the problem is the same: we are trying to validate an individual response or a point estimate against an outcome it was never designed to mirror. The right object of validation is not the specific number (a 4 on a scale, or 23% patient share), but the construct that the measure is intended to capture.

In HCP demand work, that construct is anticipated prescribing or patient allocation—not realized brand share. Patient share and brand share are different constructs: one is a stated allocation under specified assumptions; the other is an observed outcome shaped by access, adherence, promotion, and system effects. Treating them as directly comparable and asking for “validation” between them is therefore conceptually misguided.

What is meaningful is to ask whether the survey-based measure:

  • Has internal validity: it behaves sensibly within the data

  • Has convergent validity: multiple, independently measured indicators of demand point in the same direction

If those conditions hold, and overstatement has been addressed appropriately, then the adjusted patient share is a valid survey-based measure of demand. It does not need to be “validated” to brand share to be legitimate, because it is not trying to be brand share.

Bringing It All Together

Survey-based demand isn’t a forecast—it’s one piece of the picture. Correct the bias, validate the construct, and let the forecasting engine do the rest.
