
From Ex Machina to PMRC: The Psychology of Talking to a Digital Entity

By Noah Pines

A number of years ago, I watched Ex Machina, a film about a reclusive tech CEO who invites a young programmer to evaluate the human qualities of an advanced humanoid robot. The experiment is simple and unsettling: can a machine engage a human so convincingly that the interaction feels real? More importantly, can it evoke trust, intimacy, even vulnerability?

At the time of the movie's release, the concept felt speculative -- an elegant thought experiment about consciousness and control. Sitting in Newark last week at the Pharma Market Research Conference (PMRC), listening to multiple presentations on AI-assisted moderating and fully agentic qualitative moderators, I had a different reaction. The experiment is no longer fictional. It is underway in our industry.

And the question is no longer whether AI can conduct a qualitative interview. The question is how people will respond when it does.

The Rise of the Agentic Moderator

Let’s start with the technology. We have crossed a threshold. Today’s AI systems can simulate highly technical conversations with remarkable realism and fluency. They can recognize and use terminology across oncology, infectious diseases, neurology... virtually any therapeutic category. They can dynamically generate follow-up questions, probe inconsistencies, and adjust questioning paths in real time.

(As an aside, my colleague John Capano, PhD, built an AI-based expert HIV-treating physician for me to interview during a recent ThinkGen AI innovation summit. It was remarkable.)

In other words, today's genAI can do much of what a skilled qualitative moderator does: listen, respond, interpret, and probe.

At PMRC, several companies demonstrated impressive AI-assisted moderators that support human facilitators in real time -- suggesting probes, identifying themes as they emerge, flagging inconsistencies. Others went further, showcasing fully autonomous AI agents capable of independently conducting qualitative in-depth interviews.

From a commercial perspective, the appeal is obvious. Lower cost. Faster fieldwork. 24/7 scalability. Instant availability whenever a respondent is ready. Global deployment without travel. Immediate transcription and thematic coding. For certain use cases -- message testing, basic campaign testing, structured materials review -- the efficiency gains are compelling.

But efficiency is the easy part of the story.

The harder, and more consequential, question is human.

Will Respondents Open Up to a Machine?

Qualitative research in pharmaceuticals is rarely about surface-level opinions. We are often exploring HCP hesitations about a new treatment, patient fears about side effects, unspoken biases in prescribing behavior, emotional barriers to adoption, or competitive dynamics that respondents may be reluctant to articulate.

In traditional qualitative work, the moderator’s role is not just to ask questions. It is to create a psychologically safe space for meaningful conversation. It is to connect with empathy. It is to display the natural curiosity that encourages a respondent to open up. It is to probe in a timely, deliberate manner.

So what happens when that moderator is a digital entity?

Will a physician feel comfortable admitting uncertainty about a treatment decision to a machine? Will a patient disclose deeply personal apprehensions about disease progression or quality of life to an AI moderator? Will a CMIO or health system executive speak candidly about internal misalignment, digital infrastructure gaps, or the political realities of formulary decision-making when the “person” on the other end of the line is not a person at all?

There are at least two plausible futures.

In one, respondents hold back. They perceive the AI as impersonal or transactional. They fear data misuse. They doubt that a machine can truly understand nuance. They don't take the interview seriously (or misbehave) because, well, they are talking to a robot. Emotional depth suffers.

In the other, something counterintuitive happens: respondents open up more.

Think about how we interact with our iPhones. These devices know things about us that few other humans do. Our search histories, our health data, our late-night questions. We disclose personal information to digital systems with surprising ease. Why? Because there is no perceived judgment. No facial expression. No subtle social pressure.

In some contexts, the absence of human evaluation may reduce inhibition.

For sensitive topics -- treatment non-adherence, off-label behaviors, emotional distress -- an AI moderator might paradoxically elicit greater candor than a human ever could.

Which of these futures prevails will depend less on the technology and more on psychology.

Can a Machine Project Empathy?

Empathy is the cornerstone of effective qualitative moderating, particularly when engaging with patients and care partners. The skilled facilitator reads micro-expressions, adjusts tone, redirects questions, leans in at the right moment, knows when silence is productive and when it is uncomfortable.

The natural question is whether a machine can replicate this.

Technically, AI can already simulate empathic language with surprising fluency. It can acknowledge emotion, validate perspective, and express understanding in ways that seem attentive and responsive. It can modulate tone and pacing. With multimodal inputs, it may even detect subtle shifts in vocal cadence or sentiment and adjust accordingly. In some cases, these systems can project such sustained focus and calibrated empathy that users experience the interaction as deeply personal. We are already seeing examples of individuals forming strong emotional attachments to conversational AI -- not because the machine feels, but because it performs the signals of feeling with remarkable consistency.

But simulation and experience are not the same.

Does a respondent need to believe the moderator feels empathy? Or is it sufficient that the moderator behaves empathically?

From a research validity standpoint, what matters is not whether the AI has consciousness, but whether the respondent experiences the interaction as supportive and safe.

If respondents feel heard, the data may be just as rich -- regardless of whether the “listener” has a pulse.

This is where the conversation becomes philosophical. We are not just testing technology; we are testing human perception.

A Phased Evolution, Not a Binary Shift

From my standpoint, the introduction of agentic qualitative moderators will roll out in stages.

Stage One: AI-Assisted Human Moderation. Here, AI functions as an augmentation layer -- suggesting probes, organizing notes, identifying emerging themes, even stress-testing hypotheses in real time. The human remains in control. This stage is already here.

Stage Two: Hybrid Moderation. In this model, AI conducts portions of the interview -- structured segments, standardized modules -- while a human moderator joins for deeper, emotionally complex exploration. Efficiency improves without fully relinquishing human connection.

Stage Three: Autonomous Moderation for Targeted Use Cases. Certain study types -- message comprehension, attribute prioritization, basic drivers and barriers, testing materials for clarity -- may migrate entirely to AI moderators. Cost and speed advantages will drive adoption, particularly in high-volume research programs.

What I do not foresee is a wholesale replacement of human moderators in all qualitative work.

There will remain studies where emotional nuance, ethical sensitivity, or strategic ambiguity demand a human presence. Strategic positioning. Launch strategy deep-dives. Competitive war-gaming. Explorations of patient identity and lived experience.

In these moments, the ability to read the room -- literally and metaphorically -- will matter.

Implications for Commercial and Insights Leaders

For those of us in commercial, insights, and analytics roles, the strategic implications are significant.

First, we must resist the temptation to frame this as a cost conversation alone. If we reduce agentic moderation to a line-item savings exercise, we will miss the larger opportunity -- and the larger risk.

Second, we need to design experiments deliberately. Rather than debating in the abstract whether AI moderators “work,” we should run parallel studies: human-moderated versus AI-moderated interviews on the same topic, with matched audiences. Compare depth, candor, thematic richness, and impact on the business decision.
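The parallel-study comparison described above can also be scored quantitatively. As a minimal sketch -- all scores, sample sizes, and variable names here are hypothetical illustrations, not a prescribed methodology -- a simple permutation test can check whether analyst-coded candor ratings differ between matched human- and AI-moderated interviews:

```python
import random

# Hypothetical analyst-coded candor ratings (1-5 scale) for matched
# respondents: one group interviewed by a human moderator, the other
# by an AI moderator on the same topic.
human_scores = [4, 5, 3, 4, 4, 5, 3, 4]
ai_scores = [4, 4, 5, 5, 3, 5, 4, 5]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the absolute difference in means.

    Repeatedly shuffles the pooled scores and re-splits them into two
    groups of the original sizes, counting how often a difference at
    least as large as the observed one arises by chance.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

p = permutation_test(human_scores, ai_scores)
print(f"human mean={mean(human_scores):.2f}, "
      f"ai mean={mean(ai_scores):.2f}, p={p:.3f}")
```

The same structure extends to other coded outcomes -- thematic richness counts, disclosure depth, decision impact ratings -- so the human-versus-AI question becomes an empirical comparison rather than an abstract debate.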

Third, governance and transparency will matter enormously. Respondents must know in advance that they are interacting with an AI. Data privacy assurances must be explicit. Trust will be the currency that determines adoption.

Fourth, our own teams will need new capabilities. The skill set of the future insights leader may include designing prompts, supervising AI agents, auditing output quality, and integrating human interpretation with machine-generated synthesis.

The moderator of the future may be part interviewer, part programmer, orchestrating both human and digital voices.

A Place for Both

If there is one lesson from my three decades in this industry, it is that innovation rarely replaces everything that came before. It typically reshapes the mix.

There was a time when online, virtual research was viewed as inferior to in-person research. Today both are still used, but online is often favored. There was skepticism about virtual advisory boards; now they are routine. Each technological shift sparked anxiety about loss of depth, loss of quality, loss of humanity.

And yet, the core objective has remained constant: to generate insight that informs better decisions for patients, physicians, and organizations.

The interaction between people and digital entities will become a dominant theme in marketing research -- not because technology demands it, but because human behavior is adapting to it.

In the end, the better question may not be whether a human or an AI moderator is superior. The better question is situational: under what circumstances does each approach generate the most truthful, decision-useful insight?

There will be contexts where the warmth and instinct of a seasoned human moderator unlock something irreplaceable.

There will also be contexts where the neutrality, scalability, and perceived non-judgment of an AI agent reveal truths that might otherwise stay hidden.

The experiment that Ex Machina dramatized was about whether a machine could convince a human of its humanity. Our experiment is different. It is about whether interaction with a digital entity can produce authentic human insight.

That experiment is already underway. And unlike the film, this one will not end in a single dramatic reveal. It will evolve, study by study, decision by decision.

The real competitive advantage will accrue to companies that learn, early and systematically, when human presence creates value versus when digital precision does.