Reading time: 4 minutes

Today in Brief

Your patients are walking into your practice with diagnoses already in hand (courtesy of ChatGPT).

They've researched their symptoms. They've got a theory. And if your assessment doesn't match what the AI told them at 11pm last night, you've got a trust problem.

This isn't a future scenario. It's happening now. One in four patients under 30 consults AI for health questions. Studies show AI outscores physicians on both diagnostic accuracy and perceived empathy.

Today, we're unpacking what this means for the patient-provider relationship, and what the dentists who want to thrive should do.

Here's what you absolutely need to know.

(TL;DR at the end)

The Trust Inversion

Your Patient Already Consulted AI Before Seeing You. Now What?

A patient sits down. Before you can ask what brings them in, they pull out their phone.

"So I've been having this sensitivity on my lower left molar. I asked ChatGPT about it, and it said it could be cracked tooth syndrome or early pulpitis. It recommended I ask you about a percussion test."

They look up at you expectantly. Not with uncertainty, but with the confidence of someone seeking confirmation.

This is the new reality.

The Shift

One in four adults under 30 now uses AI for health information. One in six adults overall consults ChatGPT monthly for medical advice.

But here's what's different from the old "Dr. Google" days:

they're not getting a list of links to sift through.

They're getting confident, personalized, empathetic answers that sound like they were written specifically for them.

The information isn't necessarily better. But the experience is radically different.

The Empathy Gap

A study had both physicians and ChatGPT answer real patient questions. Licensed healthcare professionals rated the responses.

The results weren't close.

ChatGPT was rated "good or very good" 78% of the time. Physicians? 22%.

For empathy: ChatGPT scored 45%. Physicians scored 4.6%.

Yes, there are limitations: physician responses averaged 52 words, while ChatGPT averaged 211.

More words can feel more caring. But here's what it reveals: patients perceive AI as more patient, more thorough, and more empathetic than rushed clinicians.

One patient put it simply:

"ChatGPT has all day for me, it never rushes me out of the chat."

The Confidence Problem

AI doesn't hedge. It delivers information with an authoritative tone that (as one Stanford researcher put it) "instills confidence that may not be warranted."

Dr. Adam Rodman at Harvard names it directly:

"There is a challenge to doctor authority. LLMs are sycophantic. They can make patients confident while being more wrong about their condition than WebMD ever could."

More confident while being more wrong.

When your professional assessment differs from what the AI told them at home, trust becomes the deciding factor.

The Collision Point

Research shows patients who consult AI before appointments sometimes "get stuck on the AI's diagnosis despite their doctors thinking it's unlikely."

In the old model, the white coat conferred trust by default.

That deference is eroding, not because patients don't want to trust their doctors, but because they now have an alternative source available 24/7 that never makes them feel rushed and never dismisses their concerns.

Meanwhile, trust in physicians has plummeted from 71.5% (2020) to 40.1% (2024). Yet 81% of patients who do trust their doctor say they trust them more than any other source.

Trust, when it exists, still wins. But it must be built.

The Diagnostic Reality

Let's be honest: AI is actually good at this.

A 2024 JAMA study tested 50 physicians on medical cases, half with ChatGPT access, half without.

Results:

  • Physicians without AI: 74% accuracy

  • Physicians with AI: 76% accuracy

  • ChatGPT alone: 90% accuracy

The AI outperformed both groups, including doctors who had access to it but overrode its suggestions.

In dental specialties, AI diagnostic accuracy now ranges from 82% to 95%. For caries detection and oral lesion identification, advanced models exceed 93% accuracy.

This isn't theoretical. It's here. And patients know it.

The New Trust Hierarchy

For patients who already trust their practitioner:

AI becomes a preparation tool, not a replacement. When your recommendation aligns with what AI suggested, trust amplifies. When it differs, they ask questions but defer to the relationship.

For patients without established trust:

AI becomes the primary authority. They weigh the confident AI response against the rushed appointment and the AI often wins.

Interestingly, patients who use AI most frequently are actually more open to AI-augmented care from their providers. They're not anti-doctor. They want both.

The Path Forward

The losing strategy: Ignore AI, dismiss patients who mention it, compete on authority alone.

The winning strategy: Embrace AI as a partner and invest even more in what it can't replicate.

Acknowledge the AI in the room. When a patient mentions ChatGPT, don't dismiss it. "That's a reasonable starting point. Let me show you what I'm seeing clinically."

Use AI yourself, visibly, with some patients. "I'm going to have our AI do a second analysis of this X-ray." Now you're not fighting AI. You're the person with AI and clinical judgment.

Double down on the human elements. AI can't notice a patient seems more anxious than usual. It can't remember their daughter just started college. It can't pick up on embarrassment about dental hygiene.

The Amplified Practice

This is what I call "The Amplified Practice": technology multiplies human impact rather than replacing it.

Every decision enhanced by AI insights. Every diagnosis benefiting from both algorithms and experience. Every patient interaction more efficient and more meaningful.

In a world where confident AI answers are freely available, trust becomes the ultimate differentiator.

Your patients are already consulting AI.

You can fight it and lose. Or you can embrace it and become the practitioner who combines AI's power with something no algorithm can replicate: genuine human connection.

And you, as a dental professional?

Has this already happened to you?

If AI could handle your treatment planning, patient communication, or even clinical decision-making, would you welcome it or resist it?

I'd love to hear your thoughts!

📝 TL;DR:

  • 1 in 4 patients under 30 now consult AI before seeing you. They arrive with conclusions, not questions.

  • AI scores higher than physicians on perceived empathy (45% vs 4.6%) and diagnostic accuracy (90% vs 74%).

  • Trust in doctors has collapsed: from 71% in 2020 to 40% in 2024. But patients who do trust their doctor still trust them over everything else.

  • The danger: AI sounds confident even when wrong. When your diagnosis differs from ChatGPT's, trust decides who wins.

  • The opportunity: Patients who use AI regularly are more open to AI-augmented care; they want both, not either/or.

  • The winning move: Acknowledge the AI, use it visibly yourself, and double down on the human elements AI can't replicate.

  • Bottom line: In a world of free, confident AI answers, trust is your only moat. Build it or lose to the chatbot.

Help This Reach More Dentists

If you've found this valuable, please share it with a colleague who needs to hear it.

No ads, no sponsors: this newsletter grows when dentists like you forward it to dentists like them.

Thank you. It means way more than you know!

🧞 Your wish is my command.

What did you think of this issue? We'd love to hear your thoughts – and don't hesitate to tell us what you'd like to see next!

Super P.S.: If you'd like to subscribe to the MedIA newsletter or share it with a friend or colleague, it's right here.

thanks ;)

Salim from DentAI
