Artificial Intelligence

Will AI Replace Primary Marketing Research? A Question Worth Taking Seriously

By Noah Pines

Over lunch last week at one of my favorite restaurants in Northern New Jersey, a client shared a question that had come from senior leadership -- not directly from their boss, but from a few layers up the organization.

“When does AI start replacing primary marketing research?”

We both paused for a moment. And candidly, we both reacted the same way -- a bit of discomfort, maybe even a little apprehension.

Not because the question is unreasonable. In fact, given today's non-stop news about the magic of AI, it's entirely predictable.

Across pharma, and really across corporate America, companies are investing heavily in AI. The expectations are pretty consistent regardless of industry: greater efficiency, faster decision-making, and ultimately, doing more with less. So it’s only natural that leaders are starting to look across functions and ask: where can this apply?

Insights and analytics (I&A) is an obvious place to look; after all, data is our native language and our primary currency. But just because the question makes sense doesn’t mean the implications are straightforward.

Why This Question Is Emerging Now

When you sit with the question for a bit, it becomes clear that it’s not really about “replacing research.” It’s about something more fundamental.

It’s about speed. It’s about cost. And it’s about control.

  • Can we get insights faster?
  • Can we rely less on external partners?
  • Can we bring more capability in-house?

AI, at least on the surface, seems to offer a path to all three.

We’re already seeing it in action -- in examples we've all encountered, either first-hand within our own companies and clients, or presented at industry seminars: synthetic respondents, automated summaries, AI-assisted survey design, pattern detection across large datasets. Each of these, on its own, is genuinely useful. Together, they start to paint a picture of a future where traditional research feels…optional.

That’s where I feel we need to take a big step back.

The Core Issue: AI Understands Patterns, Not People

At its core, AI is extraordinarily good at identifying and reproducing patterns. But it does not experience the world in all of its detail and nuance.

It doesn’t sit across from an NP trying to make sense of a new treatment landscape. It doesn’t sense the hesitation in a patient’s voice. It doesn’t navigate the real constraints of access, affordability, or fragmented care...or feel the gut punch of a diagnosis.

And importantly, it cannot create insight where there is no data to begin with.

This becomes especially relevant in pharma, where much of what we study exists in spaces that are inherently sparse, fragmented, or rapidly evolving: rare diseases, emerging mechanisms of action, shifting standards of care, and complex patient journeys where needs, decisions, and emotions can change dramatically from one moment to the next.

In those environments, the data required to build reliable synthetic models is often incomplete at best.

So what happens?

The model fills in the gaps. It extrapolates. It interpolates.

And sometimes, that’s useful. But we should be very clear about what it is: it’s not direct observation. It’s not lived experience. It’s an informed approximation.

It’s a subtle distinction, but it really matters.

The Risk of Creating Distance From the Customer

The more I’ve thought about this, and discussed it informally with colleagues at ThinkGen, the more it comes down to one simple, essential concept: The further we move away from the real human customer, the greater the risk.

Primary marketing research exists because it connects us directly to reality. I'd love to exclaim: "Always has, always will." It allows us to hear, in real time, how physicians are thinking (both rationally and irrationally), how patients are navigating their challenging journeys, and how decisions are being made given real world circumstances.

When we start to rely on AI-generated representations of that reality, we are introducing a layer. Sometimes that layer helps. It can synthesize, organize, and accelerate.

But it can also distort.

And the challenge is that AI outputs often come across as highly confident, highly coherent, and very complete -- even when they’re not.

So the question isn’t just whether AI can generate insights. It’s whether those insights are still anchored closely enough to the REAL people we’re trying to understand.

The In-Housing Conversation -- and What Gets Lost

Alongside this, there’s another idea that often comes up in the same breath: If AI can do more of the work, can we bring research in-house?

This isn’t a new conversation. It’s come up in different forms over the years. And in some cases, building internal capability absolutely makes sense. I recently spoke with another client who has brought certain types of studies and analysis in-house.

But there’s a tradeoff that doesn’t always get enough attention.

Objectivity.

External partners bring something that’s difficult to replicate internally: independence.

They’re not shaped by the same organizational pressures, assumptions, or internal narratives. That distance matters. It allows them to ask the uncomfortable questions, challenge the prevailing story, and surface insights that are often harder to see from the inside.

Just as importantly, they bring perspective -- perspective built over years of working across companies, categories, and therapeutic areas. That accumulated experience becomes a strategic asset, not just a service.

When everything moves in-house, that lens inevitably narrows.

Pharma companies are rightly focused on discovering and commercializing innovative new treatments. That’s their core competency. Marketing research, at its best, plays a different role: providing an objective, external view of how those innovations are perceived, adopted, and experienced in the real world.

When those roles begin to blur, so does the clarity of the insight.

And in high-stakes situations, losing that outside perspective can create blind spots that are easy to miss in the moment.

Where AI Actually Adds Value

None of this is to suggest that AI doesn’t belong in marketing research. Quite the opposite; it’s already proving to be incredibly valuable. In my own work, it’s become an indispensable partner.

It can accelerate synthesis, help identify patterns across large datasets, support hypothesis and concept generation, and make the entire research process more efficient.

Used well, it can free up time, allowing researchers to focus more on interpretation, storytelling, and strategic thinking.

But that’s the key point.

AI works best when it enhances the process, not when it replaces it.

The most effective model, at least for now, is one where AI and human expertise work together -- each doing what they’re best at. "AI + HI" as one of my clients likes to say.

How to Push Back, Thoughtfully

For those of us in I&A, the challenge is not to reject this conversation. It’s to engage with it in a way that is thoughtful, grounded, and credible.

That means moving away from instinctive reactions like “AI can’t do this,” and toward a more nuanced perspective, based upon what we know to be its assets and liabilities:

  • Here’s where AI adds value.
  • Here’s where it introduces risk.
  • And here’s where human input remains essential.

In practice, that often means reframing the discussion:

  • Efficiency gains are real -- but they are not the same as insight generation
  • Synthetic models are useful -- but only as good as the data behind them
  • Speed is valuable -- but not if it comes at the expense of accuracy or context

And perhaps most importantly:

In a regulated industry, the cost of being wrong is not theoretical.

The Path Forward: Staying Close to Reality

If there’s a direction this is heading, it’s not toward replacement.

It’s toward integration.

A future where AI helps us move faster and see patterns more clearly, while human researchers ensure that those patterns are grounded in reality. A future where technology enhances our understanding without replacing our connection to the people we’re trying to serve.

Because ultimately, that’s the role we play.

Not just generating insights. But ensuring that decisions are informed by what’s actually happening: in exam rooms, in hospitals, in homes, in assisted living facilities, in the lived experiences of patients and providers.

And for that, we still need to stay close to the customer.

What’s Actually at Stake

That question -- “when does AI replace primary research?” -- isn’t going away. If anything, we’re going to hear it more often.

The goal isn’t to dismiss it. It’s to answer it well.

With clarity. With nuance. And with a firm understanding of both the potential and the limits of what we’re working with.

Because in the end, this isn’t simply about protecting a function, or our jobs. It’s about protecting the quality of the decisions we help make.

And that still starts with understanding people: directly, authentically, and without too much distance in between.