Reading time: 4 minutes
Today in Brief
A few editions back, we shared a simple filter for separating dental AI tools worth keeping from the ones that aren't: "Is this solving something I couldn't do before, or just replacing something that already worked?"
Dental radiograph AI passed it easily. Pearl, Overjet, VideaHealth — color-coded cavities, lesions you'd never spot with the naked eye, bone loss measured to the pixel. Stuff we genuinely couldn't do before. It made the keep list.

That much is hard to argue with.
But there's a layer underneath that's worth pulling at. Because if you actually read what Pearl and Overjet sell (not the demo, but the marketing copy), the headline numbers aren't framed around diagnostic accuracy at all.
They're framed around case acceptance.
Which opens a more interesting question worth sitting with: what is this tool actually solving, and is that the same thing it's solving in your practice?
Today, let's lay out the angles. How the tech actually works, what the studies measure, what the marketing emphasizes, and where the gap between those might be worth thinking about.
Here's what you absolutely need to know.
(TL;DR at the end)
How these tools actually work
The quick version if you’re curious.
These tools are convolutional neural networks (CNNs), trained on millions of annotated radiographs. The system doesn't "understand" a tooth. It recognizes pixel patterns that statistically correlate with a carious lesion, bone loss, an apical lesion.
That's it.
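For the technically curious, here's a minimal sketch of the idea in PyTorch. It's a toy illustration, not how Pearl or Overjet are actually built: the network (TinyLesionNet), the 64x64 patch size, and the single lesion/no-lesion output are all assumptions chosen to keep the example short. Commercial systems are far larger and trained on millions of expert-annotated radiographs.

```python
# Toy illustration only: a tiny CNN that scores a radiograph patch for "lesion-like"
# pixel patterns. Architecture and sizes are assumptions for demonstration, not any
# vendor's actual model.
import torch
import torch.nn as nn

class TinyLesionNet(nn.Module):
    """Classifies a 64x64 grayscale radiograph patch as lesion / no lesion."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local pixel patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)     # single logit: lesion vs. not

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

# One forward pass on a fake patch; a sigmoid turns the logit into a probability
# that the pattern statistically "looks like" a carious lesion.
model = TinyLesionNet()
patch = torch.randn(1, 1, 64, 64)  # batch of one grayscale patch
prob = torch.sigmoid(model(patch)).item()
print(f"Estimated lesion probability: {prob:.2f}")
```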

Pearl and Overjet are both FDA-cleared for caries and bone loss detection. Pearl just got cleared for panoramic X-rays in January 2026. Well-validated, broadly accepted.
On paper, AI catches what most clinicians miss
The numbers tend to be bigger than most people assume.
In Schwendicke's reference meta-analysis (117 studies, 13,000+ teeth), the average dentist's sensitivity for detecting early proximal caries on radiographs lands around 24%. Three out of four early lesions slip through.

The ADEPT study took it further: 23 dentists, controlled trial. With AI assistance, detection of enamel-only proximal caries went from 44% to 76%, a 71% relative lift in sensitivity (roughly 32 percentage points).
For more advanced lesions, human sensitivity climbs back up. But across the board, AI stays more consistent — same patient, same image, same answer, every time.
One way to read it: the diagnostic upside is real and well-documented.
What Pearl and Overjet actually advertise
Now look at their marketing pages. The hero metrics aren't "we lifted your diagnostic sensitivity by 71%."
They're:
+30% case acceptance (Pearl)
+25% case acceptance (Overjet)
$150,000 in additional revenue in the first month (Rand Center case study, published by Pearl)
Stronger insurance documentation
Fewer claim denials
The diagnostic engine is real. But the business case is sold downstream — in visual proof for the patient and a stronger paper trail for the insurer.
The colored overlay turns "trust me, it's there" into "look — right here." A patient who sees the lesion tends to accept treatment faster. An insurer who receives a labeled, measured lesion tends to contest less.
So the question isn't whether there's value. It's where the value sits in your specific practice.
The trust dynamics worth thinking about
If a big part of radiograph AI's value is proving the diagnosis to the patient, it's worth asking: how often are your patients actually doubting your diagnosis?
For some practices the answer is "often" — new patients, no relationship yet. For others, "rarely, once the patient sees the image."
Where patients tend to hesitate isn't always the diagnosis itself. It's often one step downstream: the plan.
Filling or onlay? Onlay or crown? Crown or extraction-and-implant? Those decisions are where second opinions show up. And the variability isn't imagined.
The well-known Reader's Digest piece "How Dentists Rip Us Off" (1993): a journalist had the same mouth examined by 51 American practices. Treatment plans ranged from one crown and four fillings to full-mouth reconstruction.
A 2023 pilot study in Victoria, Australia, found that 61% of practitioners changed their treatment plan on complex cases after seeing a peer consensus opinion.
A useful framing, then: radiograph AI addresses the upstream step (what's there). The downstream step (what to do about it) is a separate problem, with separate dynamics.
Your homework this week
Before signing (or renewing) a radiograph AI contract, one question worth sitting with:
In my practice, how often does case rejection come from a patient doubting what they see on the X-ray, vs. doubting what I'm recommending we do about it?
If most of your friction is upstream, the case acceptance numbers in the studies probably translate well. If most of your friction is downstream, the lift may be smaller — still useful, just for different reasons.
Either way, worth knowing which one you're actually buying for.
TL;DR
Pearl, Overjet, VideaHealth all use CNNs trained on millions of annotated radiographs. FDA-cleared, technically solid.
Dentist sensitivity on early caries lands around 24%. AI assistance lifts it by ~71% in relative terms. The diagnostic upside is documented.
What the companies emphasize commercially isn't diagnostic accuracy — it's +25-30% case acceptance and stronger insurance documentation.
Both effects are real. The interesting question is where each lift actually shows up in your practice.
Patient trust friction tends to split between two places: the diagnosis (does the problem exist?) and the plan (is this the right way to fix it?).
From the 1993 Reader's Digest experiment to a 2023 Australian peer-consensus study, the same patient often gets very different plans depending on who's reading the case — a different problem from image reading.
The "is this tool right for me" answer depends a lot on which of those two is your bigger blocker.
That's it for today. Hope this gave you a clearer view on where radiograph AI fits — and where it doesn't.
See you next time!
Salim, Co-Founder at DentAI SA
P.S. Hit reply with one word. DIAGNOSIS if the biggest acceptance blocker in your practice is patients doubting what they see on the X-ray. PLAN if it's patients doubting what you recommend they do about it. Curious where the audience actually sits. Enough replies and next edition turns into the data.