What HR Leaders Should — and Shouldn’t — Trust AI to Do

September 15, 2025
Brian Smith, Vice President of Product
Read Time: 8 minutes
Artificial intelligence (AI) has arrived in the benefits space. Whether it’s chatbots that guide employees through open enrollment or tools that generate tailored communications, AI is quickly transforming how organizations deliver employee benefits. 

For HR teams, the appeal is clear: AI eliminates the burden of repetitive tasks, allows 24/7 support, and personalizes the benefits experience at scale. A 2024 Gartner survey found that within only six months, the number of HR teams conducting AI pilots or testing concepts had doubled. 

But beneath this technological promise lies a complex reality. While AI can streamline benefits administration, it may also perpetuate poor decision-making or introduce new forms of bias that can undermine equity and accessibility. The same algorithms designed to democratize benefits information can inadvertently create barriers for employees most in need of support.

When AI systems fail to account for linguistic diversity, cultural differences, or digital literacy, the consequences extend beyond frustrated employees. This impacts employee welfare, puts regulatory compliance at risk, and undermines organizational trust.

The challenge isn’t whether to adopt AI, but how to implement it safely and equitably. Here are some important considerations as you bring AI into your HR practices.


AI should represent everyone, not just a few

Most generative AI systems are trained primarily on English-language, Western-centric datasets, putting diverse workforces at a disadvantage from the start. Across models, outputs tend to reflect the values of English‑speaking and Protestant European countries more than those of other regions. For example, an AI assistant might assume “family” always means a two-parent household, or that “time off” refers to Christian holidays — norms that don’t apply universally.

If your AI assistant is helping employees interpret policies, choose plans, or get guidance about sensitive topics, it may default to Western norms about family, work, privacy, or health. It’s important to institute checks to ensure what’s being shared is culturally relevant to each individual.

AI shouldn’t perpetuate stereotypes

Generative models can reinforce stereotypes through language tone, confidence levels, or the complexity of explanations provided to different user groups. These biases may be imperceptible to casual users but create measurable differences in user experience and outcomes. 

Worse, some AI systems have been accused of outright discrimination. Even subtle biases can erode employee trust in AI-powered tools. When workers perceive that AI treats them differently based on their demographics, it can damage the employer-employee relationship and reduce participation in valuable benefits programs.

As benefits play a critical role in how people understand and access healthcare, HR leaders must thoroughly vet their AI tools and partnerships to ensure they’re not perpetuating these harmful stereotypes. It’s also important to remember that AI is not the answer to every problem. In some cases, a well-established deterministic model (like a rules-based system for eligibility checks) may actually be more reliable and lower-risk than a newer, untested AI solution.
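To see why a deterministic model can be the safer choice, consider what a rules-based eligibility check looks like: a handful of explicit, auditable conditions. The sketch below is illustrative only; the field names and thresholds are hypothetical, not any vendor’s actual eligibility logic.

```python
# Illustrative sketch: field names and thresholds are hypothetical,
# not any real plan's eligibility rules.
def is_benefits_eligible(employee: dict) -> bool:
    """Deterministic check: the same input always yields the same answer."""
    rules = [
        employee.get("employment_status") == "active",
        employee.get("hours_per_week", 0) >= 30,   # hypothetical full-time threshold
        employee.get("tenure_days", 0) >= 60,      # hypothetical waiting period
    ]
    return all(rules)
```

Because every rule is explicit, an auditor can point to the exact reason an employee was approved or denied, a guarantee a generative model cannot make.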

AI should be accessible by people of all abilities

Many AI-powered benefits tools aren’t built with accessibility in mind. Employees who use screen readers or voice commands, or who need simplified interfaces, often have no effective way to navigate these systems.

New federal rules make this issue more urgent. In April 2024, the Department of Justice finalized accessibility standards requiring covered organizations’ websites and mobile apps to meet WCAG 2.1 Level AA.

When employees with disabilities can’t access AI benefits tools, companies not only put themselves at risk of compliance failures under the Americans with Disabilities Act; more importantly, these workers may miss out on benefits they need and deserve. HR leaders must therefore prioritize accessible solutions.

AI shouldn’t reinforce the digital divide

As valuable as AI tools are, they must not assume all employees share the same comfort or familiarity with AI technology, or with technology overall. Built-in assumptions like these can affect how employees access and engage with their benefits. For example, people with limited experience navigating benefits programs may find broad AI-generated explanations overly technical or hard to follow. Others may not feel comfortable engaging with AI tools at all. 

Either scenario illustrates how AI tools could undermine equitable access to benefits or degrade the employee experience. When introducing AI solutions, companies must make sure they’re just as easy to use for people who engage with AI every day as for people who’ve never even heard of it. The good news is that some fixes are simple. For example, instructing the system to answer at an eighth-grade reading level, or to provide plain-language explanations for technical terms, can go a long way toward making AI guidance more inclusive.
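One way to enforce a plain-language target is to screen generated answers against a readability formula before showing them. The sketch below uses the standard Flesch-Kincaid grade-level formula with a crude vowel-group syllable count; treat it as a rough automated gate, not a substitute for human review.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate U.S. reading grade level of a passage (rough heuristic)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[a-zA-Z']+", text)
    # Crude syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def needs_simplification(text: str, max_grade: float = 8.0) -> bool:
    """Flag answers above the target grade so they can be rewritten."""
    return flesch_kincaid_grade(text) > max_grade
```

An answer that fails the check can be routed back to the model with a “rewrite in plainer language” instruction before the employee ever sees it.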

AI should be dependable

Automation bias is an often-overlooked risk with AI tools: users place too much trust in automated systems and accept their outputs without question. Yet these tools are only as good as the data sets they’re trained on. 

Incorrect AI guidance in benefits administration can lead to missed coverage opportunities, inappropriate plan selections, or enrollment in programs for which employees aren’t eligible. These errors often aren’t discovered until employees need to use their benefits, requiring urgent HR intervention and creating potential liability for the organization. 

Which leads us to a final consideration HR leaders must reconcile: while powerful, AI is simply not appropriate for every use case, at least not yet.

AI in HR: What’s possible today, and where to use caution

HR leaders are already using AI for many tasks, including document verification, eligibility, messaging and communications, and chat support. In these ways and more, AI improves the benefits experience while reducing human error, helping HR teams move faster and with greater accuracy. But not all services should be left to AI.

As we integrate AI within PlanSource, our team carefully assesses when and how we can provide the best, always-on, automated experiences in a way that doesn’t lead to the above risks. For example, one of our objectives is to continually enhance the benefits enrollment experience, removing confusion and offering guided decision support powered by data to help people make the right choice.

And unlike many organizations, we’re not leaving decision support to AI. 

Early testing, consumer research, and thorough assessment against our ethical AI framework made it clear that AI cannot yet be trusted with a life-impacting decision such as which health plan to choose. In benefits, getting this wrong can have massive repercussions. 

Instead, our decision support model uses advanced statistical techniques — like Monte Carlo simulation and Bayesian modeling — to generate bundled benefit recommendations. Rather than overwhelming users with questions upfront, the system starts with three transparent, data-driven plan bundles that balance both typical and high-utilization scenarios. People can then dive deeper, adjusting inputs such as household income, health savings, spouse’s benefits, or anticipated healthcare needs to refine the recommendations and fully understand the tradeoffs. This combination of rigor and flexibility gives users both clarity and control over their choices, based on tried-and-true algorithms rather than a generic data set that includes thousands of poor decisions real people have made over time.
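To make the Monte Carlo idea concrete, here is a minimal sketch of how simulated annual medical spend can be turned into an expected total cost for a plan. The plan designs and the spend distribution below are invented for illustration; this is not PlanSource’s model, and a production system would be calibrated on real claims data.

```python
import random

def out_of_pocket(spend: float, deductible: float,
                  coinsurance: float, oop_max: float) -> float:
    """Member cost for one year of medical spend under a simple plan design."""
    if spend <= deductible:
        cost = spend
    else:
        cost = deductible + coinsurance * (spend - deductible)
    return min(cost, oop_max)

def expected_annual_cost(monthly_premium: float, deductible: float,
                         coinsurance: float, oop_max: float,
                         trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate: premiums plus average simulated out-of-pocket cost."""
    rng = random.Random(seed)
    # Hypothetical right-skewed spend distribution (median near $1,100/year).
    draws = (rng.lognormvariate(7.0, 1.2) for _ in range(trials))
    avg_oop = sum(out_of_pocket(s, deductible, coinsurance, oop_max)
                  for s in draws) / trials
    return 12 * monthly_premium + avg_oop

# Compare a low-premium/high-deductible plan with a richer plan.
hdhp = expected_annual_cost(monthly_premium=150, deductible=3000,
                            coinsurance=0.2, oop_max=6000)
ppo = expected_annual_cost(monthly_premium=400, deductible=500,
                           coinsurance=0.1, oop_max=3000)
```

Because each simulated year is priced under every plan, the comparison captures both the typical case and the expensive tail, which is exactly what a single “expected spend” number hides.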

While AI-powered benefits administration has huge upsides, it’s not yet ready to take on every task, for the risks mentioned above. For that reason, companies that thoughtfully integrate AI into their products — and even remove it from some elements — are better positioned to stand the test of time.

How to derisk AI integration

Today, there is no such thing as flawless AI, and we take that reality seriously. What matters is how organizations use AI, including the steps they take to test it before launch and continually improve it. Trust is not built on perfection, but on accountability and progress. 

When integrated with care, AI can and does strengthen benefits administration. That’s why we continue working to address challenges and make better experiences for everyone, while putting safeguards in place to monitor performance and promote equity:

  • Accuracy auditing: Keep a human in the loop with continuous auditing of your AI-driven outcomes, particularly when health coverage or financial impacts are at stake, as with dependent verification. 
  • Equity auditing: Test AI regularly across diverse groups — non-native English speakers, varying digital literacy levels, and employees with disabilities. Document gaps and set improvement targets.
  • Multi-modal support: AI isn’t a universal fix. Offer live chat, multilingual print, and in-person counseling, with easy escalation to human help.
  • Accessibility first: Work with vendors who meet WCAG 2.1 AA and ADA standards, building for accessibility from the start.
  • Continuous monitoring: Keep AI training aligned to evolving policies. Track satisfaction and completion rates to identify disparities.
  • Feedback loops: Provide clear ways to report issues or bypass AI. Use feedback to refine tools and flag systemic problems.
  • Organizational red lines: Clearly define your high-risk areas — such as benefits eligibility or claims approval — which should be off-limits to AI tools.
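Part of this monitoring can be automated. The sketch below flags demographic groups whose benefits-enrollment completion rate trails the best-performing group by more than a set margin; the group labels, rates, and 10-point threshold are illustrative assumptions, not real data or a recommended standard.

```python
def flag_disparities(completion_rates: dict[str, float],
                     max_gap: float = 0.10) -> list[str]:
    """Return groups whose completion rate trails the top group by more than max_gap."""
    best = max(completion_rates.values())
    return sorted(g for g, rate in completion_rates.items()
                  if best - rate > max_gap)

# Illustrative numbers only, not real enrollment data.
rates = {
    "native_english": 0.92,
    "non_native_english": 0.78,   # 14-point gap: flagged for review
    "screen_reader_users": 0.85,
}
```

A flagged group is a starting point for investigation, not a conclusion; the follow-up is qualitative review of where and why those employees drop out.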

The bottom line: AI should help, not hurt

The goal of AI in benefits shouldn’t be automation for its own sake. Instead, it should be used to create more equitable, accessible, and effective benefits experiences for all. This requires acknowledging that AI systems, despite their sophistication, are tools that must be actively monitored and managed.

For HR and benefits leaders, this means starting every AI conversation with questions about fairness and accessibility. It means partnering with organizations who share these values and can demonstrate commitment to inclusive, human-centered design. 

Employers that get this right will find that thoughtful AI implementation doesn’t just reduce administrative burden; it builds trust, improves employee satisfaction, and creates more equitable workplaces where everyone can access the benefits they’ve earned.

To learn more about how PlanSource ensures responsible AI, read the whitepaper.
