Artificial Intelligence
Pharmaceutical Industry

The AI Conversation in Pharma Is Moving Faster Than Our Fluency

By Noah Pines

Earlier this week, I spent time at the Reuters Pharma USA Conference in Philadelphia -- an annual gathering that brings together leaders from across commercial, R&D, medical affairs, market access, omnichannel, and analytics. It was, in many ways, a snapshot of where the industry is heading. AI-enabled insights, next-best-action engines, synthetic data, avatars to coach reps, omnichannel orchestration... R&D in outer space... all of it on display, often in the same conversation.

It was energizing. It was also, at times, a bit disorienting.

Because as I listened to speaker after speaker, one thing became increasingly clear: we are moving faster than our collective understanding. Terms and phrases like tokens, context windows, embeddings, and copilots were used with ease and confidence by many, particularly those with IT backgrounds, often without explanation. And while there is no shortage of enthusiasm for AI, there remains a quieter gap -- not in interest, but in fluency.

That gap matters more than we might think. Without a working understanding of how these systems actually function, we risk over-trusting outputs, misinterpreting results, or simply missing where the real value lies. In a field like ours, where decisions carry both clinical and commercial consequences, that’s not a trivial concern.

Building AI Fluency: A Shared Responsibility

This is something we are actively and intensively working on at ThinkGen, with Brian Hull leading the charge. We are investing in building AI fluency across our teams, from our SBU leaders to our research managers, our field department, and our operations staff. We've brought on top technical specialists and experts in the field, and we continue to search for more. Because this is not a capability that can sit in a corner of the organization. It has to be understood, at least at a foundational level, by the people who are designing the research, interpreting outputs, advising our clients, and ultimately helping to shape clients' decisions.

At the same time, we see a responsibility -- and frankly, an opportunity -- to help educate our clients and the broader I&A community. That’s part of the motivation behind producing this essay, and what will likely be a series of reflections over the coming months. Not just on terminology, but on what AI can realistically do today, what remains aspirational, and where human judgment continues to do the heavy lifting.

Because not all AI is created equal. Some tasks can now be completed almost instantly, such as summarizing literature, identifying patterns, and drafting content. Others still require iteration, context, and careful oversight. And part of becoming fluent is understanding that distinction: what happens at the touch of a button, and what still takes time, expertise, and discernment.

Why This Shift Matters for Insights & Commercial Teams

For those of us in I&A, and for our colleagues and stakeholders in commercial roles, this shift is particularly significant. AI is not just another tool in the toolkit. It is becoming embedded in how we engage with respondents, develop our research materials, generate insights, and support decision-making.

But unlike traditional analytics, which tend to be deterministic, AI is probabilistic. It doesn’t simply retrieve answers; it generates them. And that introduces both power and risk.
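The retrieve-versus-generate distinction can be made concrete with a toy sketch. Everything here is invented for illustration -- the KPI table, the candidate words, and the probabilities -- but the contrast is the real one: a lookup always returns the same answer, while generation samples from a distribution.

```python
import random

# Deterministic analytics: the same query always returns the same stored answer.
# (The KPI table and word probabilities below are invented for illustration.)
kpi_table = {"Q2 share": "14.2%"}

def retrieve_answer(query):
    return kpi_table[query]  # lookup: identical output every time

# Probabilistic generation: the next word is *sampled* from a distribution,
# so different runs (or different sampler states) can produce different text.
next_word_probs = {"growing": 0.6, "flat": 0.3, "declining": 0.1}

def generate_answer(rng):
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

answer = retrieve_answer("Q2 share")      # always "14.2%"
word = generate_answer(random.Random())   # any of the three words, by chance
```

That sampling step is exactly where both the power (novel, fluent output) and the risk (plausible but unverified output) come from.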

To navigate that, we need a shared language and an understanding of the basic concepts.

The Core Concepts: A Very Preliminary, Practical Lexicon

When I reflect on the conversations at PharmaUSA -- and candidly, on my own experience of having to look up terms in real time -- there are a handful of concepts that stand out as foundational. Not because everyone needs to become an expert, but because these ideas shape how AI behaves.

A “model,” for example, is the engine behind all of this: a system trained on large datasets to recognize patterns and produce outputs. When we talk about large language models, or LLMs, we’re referring to models trained specifically on text, capable of generating language that feels remarkably human. But these systems don’t “know” things in the way we do; they predict what is likely to come next based on patterns in data.
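As a toy illustration of "predicting what is likely to come next" -- nothing like a real LLM in scale or architecture, but the same underlying idea -- here is a bigram model that learns word-to-word patterns from a few invented sentences:

```python
from collections import Counter, defaultdict

# Invented training text; a real LLM learns from vastly more data,
# using subword tokens and a neural network rather than raw counts.
training_text = (
    "the patient reported mild symptoms . "
    "the patient reported severe symptoms . "
    "the patient reported mild fatigue ."
)

# Count which word follows which -- the "patterns in data"
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word: prediction, not knowledge."""
    return follows[word].most_common(1)[0][0]

predict_next("reported")  # → "mild" (it followed "reported" most often)
```

The model has no idea what "mild" means; it has simply seen it most often in that position. Scaled up enormously, that is still the core mechanic.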

That’s where the concept of a “prompt” becomes so important. The input we give the model -- the query, the instruction, the context -- has an outsized influence on what we get back. In many cases, what appears to be a limitation of the technology is actually a limitation of how it’s being asked to perform.
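A minimal sketch of why prompting matters: the same question, assembled with and without instruction and context. The study details below are invented, and real prompts are free-form text rather than a helper function, but the point stands either way.

```python
def build_prompt(question, context="", instruction=""):
    """Assemble a prompt; the pieces around the question shape the answer."""
    parts = [p for p in (instruction, context, question) if p]
    return "\n\n".join(parts)

# The same question, asked two ways (study details are hypothetical):
vague = build_prompt("Summarize the key findings.")

precise = build_prompt(
    "Summarize the key findings.",
    context="Wave 3 tracking study, n=250 oncologists, fielded in Q2.",
    instruction=("You are a pharma insights analyst. Limit the summary to "
                 "five bullets and flag any subgroup with a base under n=30."),
)
# An apparent "limitation of the technology" often disappears
# once the precise version is used instead of the vague one.
```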

Underneath that interaction are concepts like “tokens” and “context windows,” which govern how much information a model can process at one time. These may sound like technical details, but they have very practical implications. A model can only “see” a finite amount of information at once, which affects how it interprets complex inputs like patient journeys or large datasets.
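A simplified sketch of that finite window, using a naive word-level tokenizer (real models use subword tokenizers, and real windows run from thousands to millions of tokens, but the truncation effect is the same):

```python
# Naive word-level tokenizer; real models split text into subword tokens.
def tokenize(text):
    return text.split()

CONTEXT_WINDOW = 8  # illustrative; real windows are far larger

def fit_to_window(text, window=CONTEXT_WINDOW):
    """Keep only the most recent tokens that fit; anything earlier is unseen."""
    tokens = tokenize(text)
    return tokens[-window:]

journey = ("diagnosis referral biopsy staging first-line response "
           "progression second-line follow-up")  # 9 steps, window holds 8
visible = fit_to_window(journey)
# "diagnosis" falls outside the window: the model never "sees" it
```

Which is exactly why a long patient journey or a large dataset may need to be chunked, summarized, or retrieved selectively before a model can reason over it.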

Then there are concepts like “embeddings,” which allow models to understand relationships between ideas, and “retrieval-augmented generation,” or RAG, which helps ground outputs in real, verified data sources -- something that is particularly important in a regulated industry like ours. Without that grounding, we run into one of the most widely discussed (and misunderstood) aspects of AI: hallucination, which is the tendency of models to generate responses that sound plausible but are not actually correct.
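A toy sketch of the retrieval half of RAG: each verified document is stored as an embedding vector, and the ones closest to the query vector (here, by cosine similarity) are fetched to ground the model's answer. The three-dimensional vectors and document texts below are invented; real embeddings come from a trained model and have hundreds or thousands of dimensions.

```python
import math

# Toy document store pairing verified sources with (invented) embeddings.
docs = {
    "Label states dosing is once daily.": [0.9, 0.1, 0.0],
    "Trial enrolled 400 patients.":       [0.1, 0.8, 0.2],
    "HCPs prefer oral formulations.":     [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Similarity of two embeddings: nearby vectors mean related ideas."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, k=1):
    """RAG, retrieval step: fetch the k most relevant verified documents."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

query = [0.85, 0.15, 0.05]   # an embedded question about dosing
grounding = retrieve(query)  # → the dosing document
# Generation step (not shown): the model answers *from* `grounding`,
# which is what anchors the output to real sources instead of guesses.
```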

None of this is to suggest that the technology is unreliable. In fact, it is often remarkably capable. But it does reinforce a central point: these systems require interpretation. They require oversight. And they require users who understand enough to question what they’re seeing and why they're seeing it.

Optimism, Skepticism and the Space In Between

Another observation, both from the conference and from our ongoing work at ThinkGen, is the wide spectrum of perspectives on AI. There are those who see it as transformative in nearly every application. There are those who approach it with measured pragmatism. And there are still skeptics, who question not just the outputs, but the broader implications. Indeed, one of our AI directors recalls some contentious discussions with our researchers at the recent annual company summit.

At ThinkGen, we see value in that full spectrum. We are, broadly speaking, optimistic about what AI can enable. But we are also very intentional about creating space for skepticism: about questioning assumptions, pressure-testing outputs, and being clear-eyed about limitations.

Because in our experience, the most productive conversations tend to happen in that middle ground.

Where This Gets Real for Pharma

We are already seeing AI applied in areas that matter: synthetic respondents and AI moderation in marketing research; next-best-action engines in commercial strategy; AI-assisted content generation; and patient engagement platforms.

In many cases, the technology is advancing faster than our governance frameworks, our validation processes, and our shared understanding. That creates both opportunity and risk.

Which is why, in my view, the conversation we need to be having is not just about what AI can do, but about how we use it responsibly. How we validate outputs. How we integrate it into workflows. And how we ensure that, even as these tools become more embedded, the ultimate responsibility for decisions remains where it belongs.

From Tools to Judgment

If there is one theme that continues to emerge, it is this: AI is not replacing human judgment. But it is reshaping where and how that judgment is applied.

For those of us in I&A, that shift is profound. Our role is evolving from data collection and reporting to something closer to interpretation, validation, and guidance in an AI-augmented environment.

And that evolution starts with fluency: not technical mastery, but enough understanding to ask better questions, challenge outputs, and recognize both the potential and the limits of what we’re working with.

Because ultimately, the decision, and the accountability, still sits with us.

Always Keen to Hear from You

I’d be very interested to hear how others are experiencing this.

What concepts are you still working to understand? What terms would you add to this shared lexicon? And how are you building fluency within your own teams?

More to come on this topic.

#AIinPharma #InsightsAndAnalytics #PharmaCommercial #DigitalHealth #Omnichannel #HealthcareInnovation #MarketResearch #FutureOfWork