Compversation #6 - Is AI on Our Side?

April 15, 2025

How can we make decisions that are fair, equitable and effective… when the concept of equity itself is difficult to define? 

In the pay transparency era, employees and candidates are less and less likely to settle for vague explanations and arbitrary decisions borne of confidential negotiations. For society as a whole, this is good news. But as compensation & benefits professionals, we know this shift will lead to one thing: more complex HR processes. 

At each stage of the employee lifecycle, a myriad of criteria come into play. Some relate to the state of the market, others to the individual situation of the employee. Both are becoming increasingly difficult to evaluate fairly, and even some of the most standard criteria are now being questioned. For example, is it equitable to take into account the school or university where an employee received their degree, especially when they're multiple years into a role? Believe it or not, I've known numerous companies to use employee diplomas as a factor in pay increase discussions for people with five-plus years of seniority!

But according to a recent judgment by the French Court of Appeal, this may no longer be acceptable. In fact, a growing number of young businesses are turning away from this criterion, which, after all, says little about the value an employee brings to an organisation compared to their actual performance. In short, starting offers and salary reassessments are becoming increasingly complex.

We need to find ways of generating offers and conducting salary reviews that are competitive, consistent with market conditions and internal compensation policies, and above all... compliant with a rapidly tightening legislative framework. Mission impossible? 

AI as a decision-making tool 

This ‘complexification’ problem isn’t unique to comp & ben — or even HR. All departments are drowning in information and data that needs to be understood, sorted, and used. In this environment, AI is a valuable ally. Algorithms are very effective at ‘digesting’ all of this data and proposing solutions adapted to business needs. And the use of AI isn’t limited to tech functions. According to a 2023 McKinsey survey, it's becoming more and more widespread among executives, many of whom are now using AI tools in their day-to-day work.

To me, it seems obvious that AI algorithms will play an increasingly important role in the compensation and benefits sector throughout the employee lifecycle. If we feed these tools relevant data on the state and trends of the market and our internal compensation policies, algorithms should be able to suggest fair starting offers — perhaps even more effectively than humans. 

They won’t be influenced by subjective factors like a candidate’s negotiation skills. Similarly, the thorny topic of effective performance evaluations could also be transformed by the use of algorithms. 

And if we can use AI tools to conduct fairer evaluations, the idea of using them to suggest annual salary increases will become more accepted and less controversial, making our role as comp & ben professionals easier. Finally, at the company level, algorithms can provide a bird's-eye view that allows us to budget salary expenses in advance. They could also help us conduct internal audits and detect potential inequalities such as gender pay gaps, a perspective we're actively exploring at Figures (more on this this summer!).
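To make the audit idea concrete, here is a minimal sketch of a pay-gap check of the kind described above. Every figure and field name is invented for illustration; a real audit would control for far more factors (role, location, seniority) and use proper statistical testing.

```python
from statistics import median

# Hypothetical sample rows; all figures are invented for illustration.
salaries = [
    {"gender": "F", "level": "senior", "salary": 68000},
    {"gender": "M", "level": "senior", "salary": 74000},
    {"gender": "F", "level": "junior", "salary": 42000},
    {"gender": "M", "level": "junior", "salary": 44000},
]

def median_gap_by_level(rows):
    """Per-level gap between male and female median pay,
    expressed as a share of the male median."""
    gaps = {}
    for level in {r["level"] for r in rows}:
        m = median(r["salary"] for r in rows
                   if r["level"] == level and r["gender"] == "M")
        f = median(r["salary"] for r in rows
                   if r["level"] == level and r["gender"] == "F")
        gaps[level] = (m - f) / m
    return gaps

gaps = median_gap_by_level(salaries)
```

Comparing medians within each level (rather than across the whole company) is what turns a raw gap into something closer to an "equal work" comparison, which is the figure legislation tends to care about.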

A difficult transition 

There are many advantages to using AI in compensation & benefits. But the transition will also be trickier than in other areas, like marketing or product development. For one thing, that's because the data we use is particularly sensitive and must be protected. But another issue is the widespread assumption that employees will have difficulty accepting compensation decisions made by an algorithm.

According to Gartner, 60% of compensation and benefits professionals cite this fear as one of the biggest barriers preventing them from automating pay decisions. But in the same study, Gartner found that employees have just as much confidence in algorithms as they do in their managers when it comes to making fair decisions about pay.

In fact, the origin of the decision (i.e. manager or AI) has only a tiny impact on employees' perceptions compared to the decision itself (it will be perceived positively if it's positive, and negatively if it's negative). As HR Director at Criteo, I saw for myself that this fear is often unfounded. We put in place an algorithm that suggested personalised salary increases based on each employee's compa-ratio and performance review history (an evolution of the classic merit matrix).

We allowed managers to edit some of those suggestions, but aimed to have a maximum of 5% of increases deviate from this algorithm-based decision. This process was very well received by our teams, who saw it as a more reliable and less arbitrary way of making decisions. (It should be noted that I was HR Director for the technical teams, who tend to be more inherently tech-savvy).
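The merit-matrix logic described above can be sketched in a few lines. This is not Criteo's actual algorithm; the compa-ratio bands and percentage figures below are invented purely to show the shape of the approach.

```python
# Illustrative merit matrix: (compa-ratio band, performance rating 3-5)
# mapped to a suggested increase percentage. All values are invented.
MERIT_MATRIX = {
    ("below", 5): 8.0, ("below", 4): 6.0, ("below", 3): 4.0,
    ("within", 5): 5.0, ("within", 4): 3.5, ("within", 3): 2.0,
    ("above", 5): 3.0, ("above", 4): 2.0, ("above", 3): 1.0,
}

def compa_band(salary, range_midpoint):
    """Classify the compa-ratio (salary / range midpoint) into a band."""
    ratio = salary / range_midpoint
    if ratio < 0.9:
        return "below"
    if ratio > 1.1:
        return "above"
    return "within"

def suggest_increase(salary, range_midpoint, rating):
    """Suggested raise percentage; ratings under 3 get no merit increase."""
    if rating < 3:
        return 0.0
    return MERIT_MATRIX[(compa_band(salary, range_midpoint), rating)]

# A strong performer paid below their range gets the largest suggestion.
suggestion = suggest_increase(52000, 60000, 4)
```

Managers then review the output, as described above, with only a small share of final decisions expected to deviate from the suggestion.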

I was able to see the impact of this approach in terms of time saved and peace of mind for employees, managers, and the HR team alike. From an HR perspective, AI allowed us to get back precious time in a process that’s often long and painful, especially when handling large volumes of data (when establishing or reviewing salary ranges, for example). 

And managers, who are increasingly overwhelmed, see compensation decisions as an extra burden (I’ll talk about that more in a future issue). If it gives them access to fair, effective compensation suggestions, many will rely on algorithms (as demonstrated by an experiment conducted at IBM). The idea isn’t to give algorithms the final say but to make them one more tool in the decision-maker’s belt. 

‘It wasn’t me, it was the AI’? 

None of this means we should rush to adopt AI tools without taking the necessary precautions. There is, of course, the issue of data confidentiality and protection, and the need for secure algorithms. 

But we also can’t ignore the legislative and societal context. When it comes to pay equity, the burden of proof now lies with the employer, and all compensation decisions must be justifiable. The AI models we use should mitigate risks for businesses, not expose them to new dangers. Payroll decisions cannot be justified by simply saying ‘The computer said so’. 

Quite the opposite, in fact: AI algorithms can’t be black boxes. Comp & ben teams must work to make their algorithms explainable. In other words, we must always be able to find out what criteria went into a particular decision. Data quality is also crucial. Data that reflects existing pay gaps and bias could generate even more inequality. Remember that AI — even generative AI — can only make suggestions based on the information it’s been given. 
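In code terms, explainability can be as simple as returning the criteria alongside the number. The sketch below is a toy illustration of that principle; the fields, thresholds, and adjustment values are all invented.

```python
def explainable_suggestion(salary, range_midpoint, rating, market_median):
    """Return a suggested increase together with every criterion used,
    so the decision can always be traced back and justified."""
    compa_ratio = round(salary / range_midpoint, 2)
    base = 2.0 if rating >= 3 else 0.0           # invented base increase
    market_adj = 1.5 if salary < market_median else 0.0  # invented adjustment
    return {
        "suggested_increase_pct": base + market_adj,
        "criteria": {
            "compa_ratio": compa_ratio,
            "performance_rating": rating,
            "below_market_median": salary < market_median,
        },
    }

result = explainable_suggestion(48000, 55000, 4, 52000)
```

A record like this is what lets a comp & ben team answer "why this number?" without opening up the model itself, which is exactly what a tightening legal framework demands.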

Companies that want to use AI tools must first embark on a process of cleaning up their databases to ensure they have quality HR data to work with. This is the key prerequisite to the use of algorithms: AI is only the final step in a long process of people analytics. But despite all of these obstacles and precautions, I believe AI represents a remarkable opportunity for our profession. As soon as algorithms are capable of limiting legal risks while saving time and increasing efficiency, they will be widely adopted. 

Are there any early adopters of algorithms among the readers of this newsletter? I’d love to hear your feedback! 

Let’s keep the conversation going 

Here’s a selection of content to give you food for thought. Feel free to send me articles that you’ve found interesting on this subject! 

AI and pay equity: Positives and pitfalls for employers to consider — Keith A. Markel, Jessica L. Lipson and Alana Mildner Smolow, Reuters

An interesting, relevant and comprehensive piece on the use of AI in the age of pay transparency, this article presents the risks and opportunities of these new tools in terms of legal and data protection. 

Bringing AI in pay decisions — Joanne Sammer, SHRM

An article examining IBM's use of AI to inform HR decisions: in short, algorithms make suggestions for compensation or raises, but the final say rests with managers. In practice, only 5% of managers deviate from the AI's suggestions.

