You cannot escape the impact of AI at the moment. Whether you feel threatened by its potential to take or change your job, or you’re worried about your GPT conversations appearing online, AI is top of mind in our personal and professional lives.
And now, it’s firmly entered one of the most emotive conversations of all — how it impacts employee pay. AI is already making its mark in the world of compensation, supporting everything from benchmarking to salary recommendations. And 63% of workers expect AI’s role in pay to grow over the next five years — it’s not going away.
This article explores employees’ hopes and fears around the use of AI in determining their compensation, and why HR teams must build trust around how they use it. Armed with the latest data and practical guidance, you’ll have a clear plan for using AI to make pay decisions that are fair, defensible, and trustworthy.
How employees really feel about AI shaping their pay
Resume Now’s AI in Pay report digs deep into how employees believe AI is already influencing their compensation — and how much more influence they expect it to have soon. The numbers tell a clear story: workers are seeing AI enter pay decisions fast, and they expect that trend to accelerate. Here’s how employees see AI’s role changing over time.
(Source: Resume Now)
All this to say: employees are fully aware that AI is involved in setting their salaries. But how do they actually feel about it?
Overall, the majority of employees believe it’s a good thing. Specifically, 68% think pay decisions involving AI would be fairer, and the same percentage would trust pay decisions involving AI more than decisions made without it.
AI guilt weighs heavy
Yet approximately one in three workers have strong concerns. This is unsurprising if we zoom out and consider the rise of “AI guilt”: the fear workers have of being judged or discredited for using AI tools in their work. monday.com’s AI at Work report has explored this concept in detail, finding that guilt outranks other AI concerns, such as displacing colleagues on the team. There are regional variations, though: guilt is most pronounced in the UK, where it ranked as the third-highest concern about using AI at work, while in the US it ranked last.
If employees feel conflicted about AI for their everyday tasks, it’s understandable they might worry about AI influencing something as personal and high-stakes as pay.
(Source: Resume Now)
AI governance is a top concern
Employees also have a very real and logical fear of things like algorithmic bias, data misuse, and decisions they can’t understand or challenge. And rightly so. There are countless examples of AI systems producing biased or inconsistent outcomes when left unchecked — from recruitment algorithms that favour certain demographics to performance tools that misinterpret context or nuance.
Nevertheless, when used responsibly, AI has the potential to cut through noise and human bias, and contribute to a compensation process that feels more consistent and equitable. As Figures’ CEO Virgile Raingeard puts it:
“All departments are drowning in information and data that needs to be understood, sorted, and used. In this environment, AI is a valuable ally.”
How to deliver pay transparency that matches employee expectations
For many organisations, pay transparency has long meant one thing: publishing salary ranges. And with the EU Pay Transparency Directive coming into force in June 2026, this will be non-negotiable. But in the context of AI-assisted pay, transparency needs to go further.
Employees don’t just want visibility of numbers. They want to understand the process behind them, especially when algorithms are involved. Workers expect clarity on questions such as:
- What data is the model trained on?
- Who checks for bias?
- How often is the system audited?
- If something feels wrong, can I challenge the result?
The appetite for this level of oversight is strong. 94% of employees want independent reviews of pay algorithms, according to Resume Now’s AI in Pay report. Broader workplace sentiment mirrors this: monday.com finds that trust in AI is increasingly tied to data privacy (the number-one concern at 40%) and accountability, especially as employees switch between multiple AI tools each day, sometimes without knowing which system is behind which output.
To meet these expectations, HR teams need to shift from simple pay transparency to process transparency, especially where AI plays a role. Here’s how.
Step 1. Audit your data sources
Before AI touches a single pay decision, HR needs a clear picture of the data feeding it. Employees know that algorithms reflect the information they’re trained on, and if that information is biased, incomplete, or outdated, the outputs will be too.
Here’s your data audit checklist:
☑ Make sure your market data is current and relevant: Use up-to-date, verified benchmarks, not scraped job boards or global averages that don’t match your job architecture.
☑ Keep your internal data clean and consistent: Performance ratings, job levels, compa-ratios, and historical decisions should be well maintained, not scattered across spreadsheets or interpreted differently by each team.
☑ Investigate the “hidden layers” behind your AI tools. Ask vendors:
- Which subprocessors access employee data?
- Which model powers the tool, and how often is it updated?
- What third-party datasets feed the recommendations?
- How is bias detected and corrected?
- Can they provide an audit log explaining each recommendation?
Step 2. Conduct regular fairness audits
Employees expect proof that AI-assisted decisions are fair. Fairness audits should happen at least once per comp cycle, and they don’t need to be complicated.
Here’s your fairness audit checklist (a code sketch of the first two checks follows the list):
☑ Test for demographic equity: Run structured checks for gender and ethnicity pay gaps (where permissible), and flag any unexplained differences after accounting for role, level and experience.
☑ Check for pay compression: Compare long-tenured employees with recent hires in the same role. Identify roles or levels where newer employees are paid the same or more.
☑ Validate performance-linked pay: Look at rating distributions by manager, ensure similar performers receive similar increases, and confirm AI recommendations align with your compensation philosophy.
☑ Review AI-generated outliers: Flag unusually high/low recommendations, teams with lots of overrides, or shifts from previous cycles. These may signal model drift or missing context.
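To make the first two checks repeatable, they can be scripted against a simple compensation export. Here’s a minimal Python sketch, assuming a hypothetical CSV with columns named salary, gender, role, level, years_experience, and tenure_years (adapt everything to your own HRIS export, and check local rules on processing demographic data):

```python
# Minimal fairness-audit sketch (illustrative only). Column names are
# hypothetical; adapt them to your own HRIS or comp-tool export.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compensation_snapshot.csv")  # hypothetical export

# Check 1: adjusted pay gap. Regress log salary on gender while
# controlling for role, level, and experience; a significant gender
# coefficient flags an unexplained gap that needs human investigation.
model = smf.ols(
    "np.log(salary) ~ C(gender) + C(role) + C(level) + years_experience",
    data=df,
).fit()
print(model.summary().tables[1])

# Check 2: pay compression. Compare median pay of recent hires
# (under two years' tenure) with longer-tenured employees per role.
df["cohort"] = np.where(df["tenure_years"] < 2, "new_hire", "tenured")
by_role = df.pivot_table(
    index="role", columns="cohort", values="salary", aggfunc="median"
)
by_role["gap"] = by_role["new_hire"] - by_role["tenured"]
# Roles where new hires earn the same as, or more than, tenured staff:
print(by_role[by_role["gap"] >= 0])
```

Treat the output as a flag for investigation rather than a verdict: an unexplained coefficient or a compressed role deserves human review before anyone draws conclusions.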
Step 3. Keep humans firmly in the loop
AI can certainly streamline your compensation decisions, but it can’t grasp context or nuance (at least not yet), which means it falls short when it comes to the realities of individual performance.
Even employees who are comfortable with AI’s involvement need to know that people, not algorithms, remain accountable for their pay. This means designing your process so AI supports your decisions while humans validate them and make the final call.
Here’s a simple checklist to keep human judgment at the centre of your compensation cycle:
☑ Require managers to review every AI recommendation: No salary change is approved automatically. Human validation is mandatory.
☑ Allow overrides, but keep them structured: Define when managers can adjust recommendations, what justification is needed, and who signs off.
☑ Hold a calibration-style review: Before finalising pay, HR and managers review outliers, compare decisions across teams, and challenge the model’s outputs.
Real-life example: When Figures CEO Virgile Raingeard was HR Director at Criteo, his team introduced an algorithm to support annual salary reviews. The model generated personalised salary increase suggestions based on each employee’s compa-ratio (their salary relative to the range midpoint) and performance review history, essentially a more structured, data-driven evolution of the traditional merit matrix.
But critically, the process never replaced human decision-making.
- Managers reviewed every recommendation.
- They could adjust the suggested increase where context required it.
- To maintain fairness, no more than 5% of increases were allowed to deviate from the model.
This balance of structure and human oversight had a clear impact on trust.
“This process was very well received by our teams, who saw it as a more reliable and less arbitrary way of making decisions.” — Virgile Raingeard
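For illustration, here’s a hypothetical Python sketch of the general shape of such a model. The matrix values, band thresholds, and function names are all assumptions for the example, not Criteo’s actual system:

```python
# Hypothetical sketch of a compa-ratio and performance-driven increase
# suggestion, in the spirit of a merit matrix. All values are made up;
# this is NOT Criteo's actual model.

def compa_ratio(salary: float, range_midpoint: float) -> float:
    """Compa-ratio = current salary divided by the salary range midpoint."""
    return salary / range_midpoint

def band(ratio: float) -> str:
    """Bucket a compa-ratio into a band (thresholds are illustrative)."""
    if ratio < 0.90:
        return "below_range"
    if ratio <= 1.10:
        return "in_range"
    return "above_range"

# Suggested % increase by performance rating (rows) and compa-ratio band
# (columns). Employees paid below the midpoint get larger suggestions.
MERIT_MATRIX = {
    "exceeds": {"below_range": 0.08, "in_range": 0.05, "above_range": 0.03},
    "meets":   {"below_range": 0.05, "in_range": 0.03, "above_range": 0.01},
    "below":   {"below_range": 0.02, "in_range": 0.01, "above_range": 0.00},
}

def suggested_increase(salary: float, midpoint: float, rating: str) -> float:
    return MERIT_MATRIX[rating][band(compa_ratio(salary, midpoint))]

# Example: a strong performer paid below the range midpoint.
print(suggested_increase(salary=52_000, midpoint=60_000, rating="exceeds"))  # 0.08

def override_rate(decisions: list[tuple[float, float]]) -> float:
    """Share of (suggested, final) pairs where managers overrode the model.
    A guardrail like Criteo's would alert when this exceeds 0.05."""
    overrides = sum(1 for suggested, final in decisions if suggested != final)
    return overrides / len(decisions)
```

The override guardrail is as important as the matrix itself: tracking the share of final decisions that deviate from the suggestion each cycle shows whether managers trust the model or are routinely working around it.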
Step 4. Document your decision flow for transparency
Even the fairest AI-assisted pay process will feel opaque unless employees understand how decisions are made. Documenting your decision flow turns a “black box” process into something predictable, explainable, and trustworthy. Use the following decision flow checklist:
☑ Explain the tools involved: A simple overview of which systems generate benchmarks, suggestions, or data inputs (in plain language, not jargon).
☑ Clarify what’s automated vs. human-reviewed: Employees care about this distinction, so spell out which steps are fully automated and where a person makes the call.
☑ Share your audit and review schedule: State how often you check for fairness (e.g., each comp cycle), and what the audit covers.
☑ Show where your data comes from: List your benchmark sources, internal data inputs, and any subprocessors handling employee information.
☑ Provide a clear process for questions or challenges: Let employees know exactly how to raise a concern about their pay and who will review it.
Step 5. Build employee trust
Trust is built by people rather than by tools. Even when AI makes your compensation process more consistent, employees need clarity, reassurance, and communication to believe in it.
☑ Explain what AI solves: Cover inconsistency, unconscious bias, outdated market data, and slow review cycles.
☑ Explain how your system works (in plain language): Given that many employees underestimate their AI skills, outline what the model does, what it doesn’t do, and why it’s safe.
☑ Show your guardrails: Share your audit rhythm, fairness principles, human review points, and data protections. Transparency creates confidence.
☑ Communicate proactively during each compensation cycle: Use FAQs, manager talking points, and a clear breakdown of the review process to reduce confusion and anxiety.
☑ Train managers to tell the story well: Give them an example explanation they can borrow and adapt.
The future of AI in compensation
AI’s influence on compensation will only deepen from here, and with it, the level of scrutiny. Regulators are moving toward tighter standards around automated decision-making, audits, and explainability. Employees, too, are becoming more informed and confident in challenging how pay decisions are made. They’ll expect clear answers about data sources, bias checks, and who is ultimately accountable.
This is why the technology you use really matters. If AI is involved in your pay process, you need a platform that can stand up to those questions, one with transparent data and governance you can evidence.
That’s exactly what Figures is built for: compensation data you can trust, built on a privacy-first infrastructure.
If you want to bring AI into your pay process responsibly, book a Figures demo and see how it works in practice.