During the past year, I’ve added a new line of questioning to nearly every qualitative interview I conduct with physicians, patients, and care partners:
“How is the exam room conversation changing now that patients have access to, and are bringing, AI into the visit?”
Not “Dr. Google.” Not a Facebook group. Not a printout from WebMD.
ChatGPT, Grok, or Gemini. A conversational model that can summarize, translate, compare, draft questions, and -- when prompted -- sound uncannily confident about clinical considerations and trade-offs.
The answers I’m hearing are uneven, sometimes contradictory, and often still anecdotal. But the direction is consistent: medical encounters are increasingly triangular. It’s no longer just clinician and patient. It’s clinician, patient, and whatever the patient asked an AI model the night before -- or in the waiting room or parking lot.
From a commercial and insights & analytics (I&A) standpoint, this represents a material change in patient behavior. It affects the informational baseline patients bring into the exam room, the confidence with which they engage, and the decision dynamics that follow.
Michael Millenson, writing in STAT, frames this moment in a way I find instructive: medicine has historically held a monopoly over a body of knowledge that laypeople couldn’t readily access. That model is eroding as AI helps patients “find, create, control and act upon” a deeper and more personalized set of health information. In his words, the monopolistic medical model is collapsing -- and the system needs a new doctor–patient relationship to prevent democratized data from devolving into disarray.
His proposed alternative, which he calls “collaborative health,” rests on three pillars: shared information, shared engagement, and shared accountability.
That framework matters for our industry because it reframes what’s going on. This isn’t simply patients getting “more information.” It’s a redistribution of interpretive power, enabled by tools (LLMs) that speak in natural language and can personalize outputs in seconds. The big question is whether the system evolves toward constructive partnership or toward confusion, mistrust, and misalignment.
The New England Journal of Medicine’s 2025 perspective on generative AI in medicine addresses the question of where the early traction is happening: patient education and engagement, information synthesis, and certain workflow improvements (e.g., digital scribes, documentation, coding).
That matches what we’re seeing on the ground in our everyday qualitative and quantitative marketing research. Patients aren’t waiting for FDA-cleared clinical decision support to become commonplace. They’re using general-purpose LLMs right now to summarize symptoms, translate clinical language, compare treatment options, draft questions for the visit, and interpret lab results.
This is important: patient-facing adoption doesn’t require institutional rollouts. It spreads one conversation at a time.
Here’s the problem, though: as usage rises, trust is not keeping pace.
A nationally representative US survey study published in JAMA Network Open found that 65.8% of respondents reported low trust in their health care system to use AI responsibly, and 57.7% reported low trust that their system would ensure AI would not harm them.
Medscape highlighted these findings with a blunt takeaway: patients are skeptical, they want transparency, and they don’t want AI used in care without their knowledge.
So we’re entering an era where patient use of AI is rising, trust in how health systems will use it is low, and patients expect to be told when AI touches their care.
That’s not an abstract ethical debate. It will influence adherence, receptivity to education, willingness to enroll in programs, and the “temperature” of the physician–patient conversation.
If you want a concrete example of how this is happening, look at labs and portal data.
A KFF Health News report (also distributed via NPR) describes patients uploading lab results into models like Claude or ChatGPT while they wait for their HCPs to respond -- sometimes to reduce anxiety, sometimes to prepare better questions.
The piece also emphasizes the two risks HCPs and patient advocates keep raising: accuracy (a model can misread a result -- and sound confident doing it) and privacy (sensitive health data leaves the protected record the moment it’s pasted into a consumer tool).
The pattern is telling: the demand isn’t for “AI diagnosis.” It’s for interpretation on demand in moments when the system is slow, opaque, or hard to reach. That’s a behavioral wedge that will widen as access to records expands and expectations of immediacy rise.
When patients and their care partners show up having pre-processed their situation through an AI model, three things start to happen:
1. Question formation changes. Patients ask narrower, more specific questions. They use more technical language -- sometimes accurately, sometimes not. Either way, the HCP must respond to the language the patient uses.
2. Confidence gets recalibrated. Even when the model is imperfect, it often sounds certain. That can reinforce a plan -- or challenge it. HCPs are increasingly being asked not just “what should we do?” but “why is your recommendation better than what this LLM suggested?”
3. The clinician’s job expands. HCPs now have to do interpretive work on top of clinical work: correcting misconceptions, validating legitimate concerns, and rebuilding trust when the patient feels dismissed.
From an industry lens, this is the crucial point: if the dialogue changes, the levers that shape therapy decisions change too.
Patients and their care partners will increasingly ask AI to compare products’ clinical evidence, safety profiles, real-world experience, and maybe even mechanisms of action (MOAs). If your evidence story is fragmented or inconsistent across channels, the AI-synthesized version will be too.
NEJM highlights the promise of GenAI for translating and simplifying medical content. Patients will use AI to “translate” trial results and prescribing information into plain, decision-relevant language. Companies that invest in clarity, without losing rigor, will reduce misinterpretation risk and build credibility.
If your journey map still jumps from “search online” to “visit HCP,” it’s missing an increasingly common step: “consult AI.” I&A teams should start measuring when in the journey patients consult AI, what they ask it, and how it shapes the questions and confidence they carry into the exam room.
Millenson’s “collaborative health” pillars are a useful lens here. Shared information and engagement can’t work if the patient believes AI is being used on them rather than with them. And JAMA Network Open’s trust data suggests the default stance is skepticism. Programs that quietly embed AI without transparent framing may find they’ve created friction, not value.
If I had to pick a short list of leading indicators worth tracking over the next 12–18 months, it would be these: the share of patients who report consulting an AI model before a visit; how often HCPs say they field AI-generated questions or challenges to their recommendations; trust in health systems’ use of AI; and the volume of lab and portal data patients are running through general-purpose models.
This is also where primary research needs to evolve. We shouldn’t treat AI as a generic “digital behavior.” We need to understand it as a specific conversational influence layer -- one that is shaping question formation and confidence before the visit even starts.
Millenson’s warning is worth taking seriously: tearing down a hierarchy can produce confusion as easily as constructive change.
Biopharma and medtech won’t “own” the patient–AI relationship, and probably shouldn’t try. But industry can influence whether this era becomes one of escalating mistrust or of better shared decision-making.
In practical terms, that means keeping the evidence story consistent across every channel an AI might synthesize, investing in plain-language materials that survive “translation” without losing rigor, and being transparent whenever AI is embedded in patient-facing programs.
“Dr. ChatGPT” isn’t replacing clinicians. But it is reshaping the encounter.
And if your job is to understand behavior and design strategies around it, you don’t need certainty to act -- you need a disciplined way to observe, measure, and adapt before the curve steepens.