Artificial Intelligence

Are You Measuring AI in the Patient Journey?

By Noah Pines

During the past year, I’ve added a new line of questioning to nearly every qualitative interview I conduct with physicians, patients, and care partners:

“How is the exam room conversation changing now that patients have access to, and are bringing, AI into the visit?”

Not “Dr. Google.” Not a Facebook group. Not a printout from WebMD.

ChatGPT, Grok, or Gemini. A conversational model that can summarize, translate, compare, draft questions, and -- when prompted -- sound uncannily confident about clinical considerations and trade-offs.

The answers I’m hearing are uneven, sometimes contradictory, and often still anecdotal. But the direction is consistent: medical encounters can increasingly be characterized as "triangular." It’s no longer just clinician and patient. It’s clinician, patient, and whatever the patient asked an AI model the night before -- or in the waiting room or parking lot.

From a commercial and I&A standpoint, this represents a material change in patient behavior. It affects the informational baseline patients bring into the exam room, the confidence with which they engage, and the decision dynamics that follow.

From “Monopoly Over Knowledge” to Shared Information

Michael Millenson, writing in STAT, frames this moment in a way I find instructive: medicine has historically held a monopoly over a body of knowledge that laypeople couldn’t readily access. That model is eroding as AI helps patients “find, create, control and act upon” a deeper and more personalized set of health information. In his words, the monopolistic medical model is collapsing -- and the system needs a new doctor–patient relationship to prevent democratized data from devolving into disarray.

His proposed alternative -- which he calls “collaborative health” -- rests on three pillars: shared information, shared engagement, and shared accountability.

That framework matters for our industry because it reframes what’s going on. This isn’t simply patients getting “more information.” It’s a redistribution of interpretive power, enabled by tools (LLMs) that speak in natural language and can personalize outputs in seconds. The big question is whether the system evolves toward constructive partnership or toward confusion, mistrust, and misalignment.


The Near-Term Reality: Patient Education and Engagement Is Where AI Is Landing First

The New England Journal of Medicine’s 2025 perspective on generative AI in medicine addresses the question of where the early traction is happening: patient education and engagement, information synthesis, and certain workflow improvements (e.g., digital scribes, documentation, coding).

That matches what we’re seeing on the ground in our everyday qualitative and quantitative marketing research. Patients aren’t waiting for FDA-cleared clinical decision support to become commonplace. They’re using general-purpose LLMs right now to:

  • Interpret symptoms,
  • Understand lab flags,
  • Translate clinical language into plain English,
  • Draft portal messages,
  • Prepare questions for visits.

This is important: patient-facing adoption doesn’t require institutional rollouts. It spreads one conversation at a time.

The Trust Problem Will Shape the Trajectory (and It’s Not a Small One)

Here’s the problem, though: as usage rises, trust is not keeping pace.

A nationally representative US survey study published in JAMA Network Open found that 65.8% of respondents reported low trust in their health care system to use AI responsibly, and 57.7% reported low trust that their system would ensure AI would not harm them.

Medscape highlighted these findings with a blunt takeaway: patients are skeptical, they want transparency, and they don’t want AI used in care without their knowledge.

So we’re entering an era where:

  • Patients increasingly use AI independently.
  • Patients are wary of institutions using AI without disclosure.
  • Baseline trust in the health system is already strained.

That’s not an abstract ethical debate. It will influence adherence, receptivity to education, willingness to enroll in programs, and the “temperature” of the physician–patient conversation.

The Practical Use Case That’s Quietly Exploding: Lab Results

If you want a concrete example of how this is happening, look at labs and portal data.

A KFF Health News report (also distributed via NPR) describes patients uploading lab results into models like Claude or ChatGPT while they wait for their HCPs to respond -- sometimes to reduce anxiety, sometimes to prepare better questions.

The piece also emphasizes the two risks HCPs and patient advocates keep raising:

  1. Accuracy and “hallucinations” (confident-sounding wrong answers).
  2. Privacy (sensitive health data being shared with tech companies, often outside traditional health privacy frameworks).

The pattern is telling: the demand isn’t for “AI diagnosis.” It’s for interpretation on demand in moments when the system is slow, opaque, or hard to reach. That’s a behavioral wedge that will widen as access to records expands and expectations of immediacy rise.

What This Does to the Exam Room: The Conversation Is Being Rewritten Upstream

When patients and their care partners show up having pre-processed their situation through an AI model, three things start to happen:

1) The baseline informational starting point shifts

Patients ask narrower, more specific questions. They use more technical language -- sometimes accurately, sometimes not. Either way, the HCP must respond to the language the patient uses.

2) The authority dynamic changes

Even when the model is imperfect, it often sounds certain. That can reinforce a plan -- or challenge it. HCPs are increasingly being asked not just “what should we do?” but “why is your recommendation better than what this LLM suggested?”

3) The visit absorbs a new task: reconciling narratives

HCPs now have to do interpretive work on top of clinical work: correcting misconceptions, validating legitimate concerns, and rebuilding trust when the patient feels dismissed.

From an industry lens, this is the crucial point: if the dialogue changes, the levers that shape therapy decisions change too.

Implications for Biopharma and Medtech: Four Shifts to Anticipate

Shift 1: Evidence quality becomes patient-visible

Patients and their care partners will increasingly ask AI to compare products’ clinical evidence, safety profiles, real-world experience, and perhaps even mechanisms of action (MOAs). If your evidence story is fragmented or inconsistent across channels, the AI-synthesized version will be too.

Shift 2: Plain language becomes a strategic capability

NEJM highlights the promise of GenAI for translating and simplifying medical content. Patients will use AI to “translate” trial results and prescribing information into basic decision-relevant language. Companies that invest in clarity, without losing rigor, will reduce misinterpretation risk and build credibility.

Shift 3: “AI touchpoints” need to be added to patient journey and HCP influence models

If your journey map still jumps from “search online” to “visit HCP,” it’s missing an increasingly common step: “consult AI.” I&A teams should start measuring:

  • Whether patients used AI pre-visit,
  • What they asked and how they worded the prompt,
  • Whether it changed expectations,
  • Whether they disclosed it to the clinician,
  • How, as a result, they reacted to what their HCP recommended.

Shift 4: Trust and disclosure become commercial variables, not just compliance issues

Millenson’s “collaborative health” pillars are a useful lens here. Shared information and engagement can’t work if the patient believes AI is being used on them rather than with them. And JAMA Network Open’s trust data suggests the default stance is skepticism. Programs that quietly embed AI without transparent framing may find they’ve created friction, not value.

What Insights and Analytics Teams Should Be Watching Now

If I had to pick a short list of leading indicators worth tracking over the next 12–18 months, it would be these:

  1. Prevalence of patient AI use by therapy area (oncology, immunology, rare disease, chronic cardiometabolic conditions will likely behave differently).
  2. Impact on initiation and switching conversations (is AI introducing alternative regimens earlier?).
  3. Disclosure rates (do patients tell clinicians they used AI, and what prompts disclosure?).
  4. Trust segmentation (AI users are not uniformly “pro-AI”; many are hesitant).
  5. Breakpoints for confusion (which concepts are most frequently misinterpreted when filtered through LLMs, e.g., AE profiles, contraindications, biomarkers, trial eligibility, payer rules).

This is also where primary research needs to evolve. We shouldn’t treat AI as a generic “digital behavior.” We need to understand it as a specific conversational influence layer -- one that is shaping question formation and confidence before the visit even starts.

The Industry Opportunity: Help the System Move Toward Collaboration, Not Disarray

Millenson’s warning is worth taking seriously: tearing down a hierarchy can produce confusion as easily as constructive change.

Biopharma and medtech won’t “own” the patient–AI relationship, and probably shouldn’t try. But industry can influence whether this era becomes one of escalating mistrust or of better shared decision-making.

In practical terms, that means:

  • Investing in evidence narratives that remain coherent when summarized by machines,
  • Supporting clinician–patient communication rather than adding noise,
  • Taking transparency seriously (especially where AI touches patient-facing services),
  • Treating patient AI adoption as a measurable, segmentable market dynamic -- starting now.

“Dr. ChatGPT” isn’t replacing clinicians. But it is reshaping the encounter.

And if your job is to understand behavior and design strategies around it, you don’t need certainty to act -- you need a disciplined way to observe, measure, and adapt before the curve steepens.


References

  1. Maddox TM, Embí PJ, Gerhart J, Goldsack JC, Parikh RB, Sarich TC. Generative AI in Medicine — Evaluating Progress and Challenges. New England Journal of Medicine. 2025;392:2479–2483. doi:10.1056/NEJMsb2503956
  2. Nong P, Platt J. Patients’ Trust in Health Systems to Use Artificial Intelligence. JAMA Network Open. 2025;8(2):e2460628. doi:10.1001/jamanetworkopen.2024.60628
  3. Markman M. AI in Healthcare Faces Growing Skepticism Among Patients. Medscape. May 21, 2025. (Discussing JAMA Network Open studies on trust and notification preferences.)
  4. Millenson ML. Medicine’s AI Era Urgently Demands New Doctor-Patient Relationship: The Monopolistic Medical Model Is Collapsing. Here’s What Comes Next. STAT. August 13, 2025.
  5. Ruder K. Running Your Lab Results by ChatGPT? Here’s What to Keep in Mind. KFF Health News (distributed by NPR). September 11, 2025.