AADOM Podcast – When AI Sounds Confident But Gets HR Completely Wrong

Episode Summary

Public AI tools sound confident, polished, and authoritative—but in HR, compliance, and employment law, “sounds good” is nowhere near good enough. Dental practices are increasingly relying on AI to write policies, research legal questions, or handle corrective action, unaware that these tools blend outdated information, skip legally required steps, and produce answers that change based on how the question is asked.

In the past year, this has led to real harm, including a payroll company distributing an AI-generated email about a “new law” that never existed and practices disciplining protected employees based on flawed AI guidance.

This session uncovers why public AI gets HR wrong, how employees are using AI to bring misinformation into the office, and how these errors create serious liability for dental practices. Attendees will learn how to reset expectations with their team, recognize when AI has produced fiction instead of compliant guidance, and establish clear internal guardrails.

We’ll also explore the safe alternative: a closed-loop, expert-supervised AI system built on vetted HR content—illustrating the difference between risky public AI and controlled AI that actually protects your practice.

Episode Notes

Paul Edwards is the founder and CEO of CEDR HR Solutions. He and his team are leading providers of one-on-one expert HR guidance, custom employee handbooks, and management education for more than 3,000 private dental practices across the US. You can join Paul on his popular HR podcast, “What the Hell Just Happened?!”

CEDR is the All-in-One People Problem HR solution for dental practices. Custom Employee Handbooks. Unlimited HR Support. Simplified Software Tools. Seamless Payroll. Tailored for Your Practice and Your Team. Whether you need a compliant Employee Handbook for 1 or 200 employees, user-friendly HR software, or expert answers to people problems, they’re here for you. As a CEDR member, you can leverage comprehensive software, HIPAA training, personalized support, and expert HR coaching for healthcare business owners and managers.

Learn more about CEDR HR Solutions

Learn More About AADOM

 

When AI Sounds Confident But Gets HR Completely Wrong: Why Dental Practices Need Guardrails

Most practice owners and managers have now experimented with the big public AI tools. Type in a question about a policy, a sticky employee issue, or a compliance concern, and seconds later you get a polished, confident answer. It feels like magic. It feels like help. It feels like expertise.

But here is the uncomfortable truth: in HR, compliance, and employment law, “sounds good” is not good enough.

Over the past year, we’ve seen a sharp rise in doctors and office managers relying on public AI tools to write policies, research legal questions, or draft corrective action documents. Many of these AI-generated answers look professional, cite legal concepts, and feel authoritative. Yet, behind the scenes, the tool has no idea whether the answer is correct for your state, size, industry, specific employee, or your obligations under federal law.

The result?

Many practices are unknowingly implementing policies and decisions that are legally risky, noncompliant, or based on flat-out fiction.

And in one case, thousands of practices received an official-looking compliance alert about a “new law” that never existed.

Let’s start there.

When a Payroll Giant Used AI—and Accidentally Made Up a Law

This past year, we saw the fallout from a major payroll company’s experiment with AI in its sales and marketing processes. The goal was simple: generate a sense of urgency around compliance updates so prospects would feel motivated to schedule a call.

We can only guess that they fed a public AI model a prompt along the lines of:
“Write a strong compliance alert about new HR legal changes that employers need to act on immediately.”

The AI did precisely what it was asked to do: it produced a polished, authoritative email warning businesses about a new state requirement for harassment training. It cited “updates to the law,” outlined implementation deadlines, and described potential fines for non-compliance.

The only problem? The law didn’t exist.

The AI had blended pieces of outdated proposals, draft bills that were never passed, and websites speculating about possible legislation. The salesperson, who was not an HR or legal professional, skimmed the email, thought “Wow, this sounds official and AI is brilliant,” and blasted it out to thousands of businesses.

Immediately, practice owners panicked.
“Why didn’t anyone tell us this new requirement had gone into effect?”
“Are we out of compliance?”
“Are we about to get fined?”

Because we don’t use scare tactics and know AI can’t be taken at face value, we dug into it ourselves. The truth was clear: the bill had died in committee. Nothing had passed. No requirement existed.

This is a perfect snapshot of the problem: public AI tools produce confident answers, not correct answers. They mimic the tone of expertise while bypassing the substance of it entirely.

And in HR and employment law, that gap is dangerous.

Why HR Is a Terrible Place to Use Public AI

In medicine and dentistry, a diagnosis requires asking questions, gathering history, understanding context, and interpreting symptoms as part of a bigger picture. HR works exactly the same way.

A correct HR answer depends on details such as:

  • What state you are in
  • How many employees you have
  • Whether the person involved is pregnant, disabled, or on protected leave
  • Whether your city or county has its own ordinances
  • How past issues were documented
  • Whether a policy exists and how it was applied

Public AI tools rarely ask any of these clarifying questions. Instead, they answer whatever is typed, and they answer with total confidence—even when the answer is wrong.

That results in three consistent failures:

1. They skip required legal steps.

For example, when a frustrated manager asks AI to “write a corrective action for a pregnant employee who keeps coming in late,” the AI generates a strong-sounding disciplinary letter. But it completely ignores pregnancy as a protected status, the need to explore accommodations, and the requirement to consider medical guidance.

It simply mirrors the frustration embedded in the prompt.

2. They blend outdated or incorrect information.

Public AI models are trained on the entire internet—good information, old information, wrong information, speculative information, and commentary written by non-experts.

When you ask a legal question, it cannot separate:

  • A draft bill from an actual law
  • A 2024 requirement from a 2011 one
  • A hospital policy from what applies to a six-employee dental office

3. They reflect the bias of whoever asks the question.

If you ask with frustration, it writes with frustration.
If you ask with fear, it writes warnings.
If you ask, “Is ozone exposure dangerous during pregnancy?” it may overstate the risks.
If you ask, “Is the ozone system we use safe?” it may downplay the concerns.

Same tool. Different prompts. Opposite answers.

None of this is malice. It’s simply how public AI works. It predicts the most likely next words, one after another. It doesn’t understand compliance, nuance, or consequences.

And in HR, that’s exactly what you cannot afford.

Employees Are Using AI Too—and Bringing That Information to You

More employees are pasting symptoms, concerns, schedule issues, safety worries, and even workplace complaints into AI and handing the result to their doctor, manager, or HR contact.

Some of these AI printouts are:

  • Alarmist
  • Incomplete
  • Legally incorrect
  • Based on generic scenarios that don’t match healthcare settings

When this happens, the worst response is to dismiss or argue with the AI result. That creates friction with the employee.

The right response is simple:

“Thanks for bringing this to me. AI can be interesting, but it’s not our HR department. Let me run this by CEDR and check what actually applies to our practice.”

That single sentence resets expectations and keeps the conversation grounded in real policy and real law—not AI hallucinations.

So, How Should Practices Use AI for HR?

The answer: they shouldn’t, just as we shouldn’t use it to pretend to be medical professionals.

Practices should not be using public AI for:

  • Policies
  • Handbooks
  • Corrective action
  • Terminations
  • Leave questions
  • Accommodation issues
  • Safety concerns
  • Payroll or wage and hour decisions

Instead, the safe approach is:

Use your human HR team (not the one that approved that totally incorrect sales email). Let us use AI behind the scenes in a controlled, closed loop.

Because here is what makes CEDR’s approach different:

1. Our AI is sequestered.

It cannot see the internet.
It cannot mix good and bad sources.
It reads only one library: our own 26 years of compliance guidance for small healthcare practices. We have more than a hundred thousand accurate, vetted answers to HR questions and thousands of hours of documented supporting research to draw from.

2. It is narrow, not general.

It is not allowed to decide legal questions.
It is not allowed to interpret medical issues.
It is not allowed to guess.
It simply helps our HR experts search, organize, and pull from our vetted materials faster.

3. Humans approve everything.

CEDR’s HR experts review, edit, and finalize every answer.
Nothing goes to a member “because AI wrote it.”
It goes out because an expert stands behind it.

Members never see the AI. They only see the expertise.

This is the opposite of how public AI works—and the opposite of the payroll company’s scare-email fiasco.

The Bottom Line: AI Is Everywhere, but It Cannot Replace HR Judgment

There is no putting the AI toothpaste back in the tube.
Employees will use it.
Vendors will use it.
Your managers may be tempted to use it.

But the difference between safe and unsafe AI use is simple:

Public AI is not safe for HR.

Your practice should not have to figure this out alone.
You do not need to learn AI prompts.
You do not need to evaluate whether a chatbot is correct.

You need a partner who handles this responsibly, within clear guardrails, drawing on decades of expertise.

That is what keeps your practice protected. That is what keeps your team aligned. And that is what prevents the next “AI-invented law” from landing in your inbox and sending your office into a panic, or from becoming a policy that is later used against you in a lawsuit.

 

Elevate Your Job to Your Career with AADOM's Dental Management Training.

 
