What HR Leaders Should — and Shouldn’t — Trust AI to Do
Brian Smith, Vice President of Product
For HR teams, the appeal is clear: AI eliminates the burden of repetitive tasks, enables 24/7 support, and personalizes the benefits experience at scale. A 2024 Gartner survey found that within only six months, the number of HR teams conducting AI pilots or testing concepts had doubled.
But beneath this technological promise lies a complex reality. While AI can streamline benefits administration, it may also perpetuate poor decision-making or introduce new forms of bias that can undermine equity and accessibility. The same algorithms designed to democratize benefits information can inadvertently create barriers for employees most in need of support.
When AI systems fail to account for linguistic diversity, cultural differences, or digital literacy, the consequences extend beyond frustrated employees. This impacts employee welfare, puts regulatory compliance at risk, and undermines organizational trust.
The challenge isn’t whether to adopt AI, but how to implement it safely and equitably. Here are some important considerations as you bring AI into your HR practices.
AI should represent everyone, not just a few
If your AI assistant is helping employees interpret policies, choose plans, or get guidance on sensitive topics, it may default to Western norms about family, work, privacy, or health. It's important to institute checks to ensure what's being shared is culturally relevant to each individual.
AI shouldn’t perpetuate stereotypes
Some AI systems have been accused of outright discrimination, and even subtle biases can erode employee trust in AI-powered tools. When workers perceive that AI treats them differently based on their demographics, it can damage the employer-employee relationship and reduce participation in valuable benefits programs.
Because benefits play a critical role in how people understand and access healthcare, HR leaders must thoroughly vet their AI tools and partnerships to ensure they're not perpetuating these harmful stereotypes. It's also important to remember that AI is not the answer to every problem. In some cases, a well-established deterministic model (like a rules-based system for eligibility checks) may be more reliable and lower-risk than a newer, untested AI solution.
AI should be accessible by people of all abilities
New federal rules make this issue more urgent. As of April 2024, all workplace websites and apps must meet strict accessibility standards.
When employees with disabilities can't access AI benefits tools, companies not only risk compliance failures under the Americans with Disabilities Act; more importantly, these workers may miss out on benefits they need and deserve. HR leaders must therefore prioritize accessible solutions.
AI shouldn’t reinforce the digital divide
Scenarios like these illustrate how AI tools can undermine equitable access to benefits or degrade the employee experience. When introducing AI solutions, companies must make sure they're just as easy to use for people who engage with AI daily as for people who've never heard of it. The good news is that some fixes are simple. For example, instructing the system to answer at an eighth-grade reading level, or to provide plain-language explanations for technical terms, can go a long way toward making AI guidance more inclusive.
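As a minimal sketch of that kind of fix, a plain-language instruction can be baked into the assistant's system prompt. The wording and helper below are illustrative assumptions, not an actual production prompt:

```python
# Illustrative system prompt for a benefits assistant.
# The wording here is a hypothetical example, not a real product's prompt.
PLAIN_LANGUAGE_PROMPT = (
    "You are a benefits support assistant.\n"
    "- Answer at an eighth-grade reading level.\n"
    "- After any technical term (e.g., 'coinsurance', 'deductible'), add a "
    "one-sentence plain-language explanation.\n"
    "- If the question is unclear, ask a short follow-up instead of guessing.\n"
)

def build_messages(user_question: str) -> list[dict]:
    """Prepend the plain-language instruction to every conversation."""
    return [
        {"role": "system", "content": PLAIN_LANGUAGE_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

Because the instruction rides along with every request, the reading-level rule applies uniformly, without retraining the underlying model.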
AI should be dependable
Incorrect AI guidance in benefits administration can lead to missed coverage opportunities, inappropriate plan selections, or enrollment in programs for which employees aren’t eligible. These errors often aren’t discovered until employees need to use their benefits, requiring urgent HR intervention and creating potential liability for the organization.
Which leads us to a final consideration HR leaders must reconcile: while powerful, AI is simply not appropriate for every use case (at least not yet).
AI in HR: What's possible today, and where to use caution
As we integrate AI within PlanSource, our team carefully assesses when and how we can provide the best, always-on, automated experiences in a way that doesn’t lead to the above risks. For example, one of our objectives is to continually enhance the benefits enrollment experience, removing confusion and offering guided decision support powered by data to help people make the right choice.
And unlike many organizations, we’re not leaving decision support to AI.
Early testing, consumer research, and thorough assessment against our ethical AI framework made it clear that AI cannot yet be trusted with life-impacting decisions such as which health plan to choose. In benefits, getting this wrong can have massive repercussions.
Instead, our decision support model uses advanced statistical techniques, like Monte Carlo simulation and Bayesian modeling, to generate bundled benefit recommendations. Rather than overwhelming users with questions upfront, the system starts with three transparent, data-driven plan bundles, balancing both typical and high-utilization scenarios. People can then dive deeper, adjusting inputs such as household income, health savings, spouse's benefits, or anticipated healthcare needs, to refine the recommendations and fully understand the tradeoffs. This combination of rigor and flexibility gives users both clarity and control over their choices, based on tried-and-true algorithms rather than a generic dataset that includes thousands of poor decisions real people have made over time.
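To make the Monte Carlo side of such an approach concrete, here is a rough sketch: simulate a year of healthcare claims many times and rank plans by expected total cost. The plan parameters and utilization distribution below are invented for the example; they are not drawn from PlanSource's actual model or any real plan data:

```python
import random

# Hypothetical plan parameters -- illustrative only.
PLANS = {
    "low_premium":    {"premium": 1200, "deductible": 4000, "coinsurance": 0.30, "oop_max": 8000},
    "mid":            {"premium": 2400, "deductible": 2000, "coinsurance": 0.20, "oop_max": 6000},
    "low_deductible": {"premium": 3600, "deductible": 500,  "coinsurance": 0.10, "oop_max": 3000},
}

def out_of_pocket(plan, claims):
    """Member cost for a year's claims: deductible first, then coinsurance, capped at the OOP max."""
    total = sum(claims)
    if total <= plan["deductible"]:
        oop = total
    else:
        oop = plan["deductible"] + (total - plan["deductible"]) * plan["coinsurance"]
    return min(oop, plan["oop_max"])

def simulate(plan, n_trials=10_000, seed=0):
    """Monte Carlo estimate of expected annual cost (premium + out-of-pocket)."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_trials):
        # Toy utilization model: a skewed claim count, lognormal claim sizes.
        n_claims = rng.choices([0, 1, 2, 5, 10], weights=[30, 30, 20, 15, 5])[0]
        claims = [rng.lognormvariate(6.5, 1.0) for _ in range(n_claims)]
        costs.append(plan["premium"] + out_of_pocket(plan, claims))
    return sum(costs) / len(costs)

for name in sorted(PLANS, key=lambda n: simulate(PLANS[n])):
    print(f"{name}: expected annual cost ~ ${simulate(PLANS[name]):,.0f}")
```

Running many trials captures both typical and high-utilization years in the ranking, which is why a simulation-based recommendation can surface tradeoffs (low premium vs. low deductible) that a single point estimate would hide.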
While AI-powered benefits administration has huge upsides, it's not yet ready to take on every task, for the risks mentioned above. For that reason, companies that thoughtfully integrate AI into their products, and even remove it from some elements, are better positioned to stand the test of time.
How to derisk AI integration
When integrated with care, AI can and does strengthen benefits administration. That’s why we continue working to address challenges and make better experiences for everyone, while putting safeguards in place to monitor performance and promote equity:
- Accuracy auditing: Keep a human in the loop with continuous auditing of your AI-driven outcomes, particularly when health coverage or financial impacts are at stake, as with dependent verification.
- Equity auditing: Test AI regularly across diverse groups—non-native English speakers, varying digital literacy levels, and employees with disabilities. Document gaps and set improvement targets.
- Multi-modal support: AI isn't a universal fix. Offer live chat, multilingual print materials, and in-person counseling, with easy escalation to human help.
- Accessibility first: Work with vendors who meet WCAG 2.1 AA and ADA standards, building for accessibility from the start.
- Continuous monitoring: Keep AI training aligned to evolving policies. Track satisfaction and completion rates to identify disparities.
- Feedback loops: Provide clear ways to report issues or bypass AI. Use feedback to refine tools and flag systemic problems.
- Organizational red lines: Clearly define your high-risk areas, such as benefits eligibility or claims approval, which should be off-limits to AI tools.
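A minimal sketch of the equity-auditing step above: compare task-completion rates across employee groups and flag any group that trails the best-performing one. The group labels, session format, and 10-point gap threshold are illustrative assumptions for the example:

```python
from collections import defaultdict

def completion_rates(sessions):
    """sessions: iterable of (group, completed) pairs -> {group: completion rate}."""
    totals, done = defaultdict(int), defaultdict(int)
    for group, completed in sessions:
        totals[group] += 1
        done[group] += int(completed)
    return {g: done[g] / totals[g] for g in totals}

def equity_gaps(rates, threshold=0.10):
    """Groups whose completion rate trails the best group by more than `threshold`."""
    best = max(rates.values())
    return {g: best - r for g, r in rates.items() if best - r > threshold}

# Hypothetical session log: (group label, did the employee finish the task?)
sessions = [
    ("native_english", True), ("native_english", True), ("native_english", False),
    ("non_native", True), ("non_native", False), ("non_native", False),
    ("screen_reader", True), ("screen_reader", True), ("screen_reader", True),
]
print(equity_gaps(completion_rates(sessions)))
```

Running a check like this on a schedule, and documenting each flagged gap with an improvement target, turns "test regularly across diverse groups" from a principle into a measurable routine.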
The bottom line: AI should help, not hurt
For HR and benefits leaders, this means starting every AI conversation with questions about fairness and accessibility. It means partnering with organizations that share these values and can demonstrate a commitment to inclusive, human-centered design.
Employers that get this right will find that thoughtful AI implementation doesn’t just reduce administrative burden; it builds trust, improves employee satisfaction, and creates more equitable workplaces where everyone can access the benefits they’ve earned.
To learn more about how PlanSource ensures responsible AI, read the whitepaper.