The Impact of AI on Compensation: Should We Be Concerned?
After ChatGPT launched in November 2022, Google searches for ‘artificial intelligence’ went up by 300%. And just five months later, a third of organisations surveyed by McKinsey said that they were regularly using generative AI in at least one business function.
2023 will go down in history as the year that generative AI went mainstream. In the past year, we’ve seen it used in everything from healthcare to customer service to space exploration.
But while that’s all fascinating, there’s one thing we’re particularly interested in at Figures: compensation.
In this article, we’ll dive into the different ways that AI can be used to enhance and improve compensation practices — including how we use it in our own product. We’ll also chat about the risks and challenges involved, and share some advice for fair and responsible AI use.
The case for using AI in compensation
Only 40% of employees believe their pay is fair, according to a recent Gartner report. And just 40% of compensation leaders have confidence in managers’ ability to make pay decisions.
The fact is, the quality of a pay decision is often dependent on the quality of the manager who’s making it. And if employees don’t trust those managers, they’re not going to trust the decisions either.
In some cases, managers’ personal feelings and biases about certain employees can cloud the issue even further.
Plus, making these decisions takes up a huge amount of managers’ time. Handing over (some of) this process to an AI model gives them more time to focus on coaching and developing employees.
3 ways AI is already impacting pay decisions
Here are three different ways that organisations are already putting AI to use in their compensation systems.
1. Generating reliable performance ratings
In many organisations, pay increases are strongly tied to performance. But 64% of employees feel that the performance review process is a waste of time.
The thing is, if you’re going to use performance data to make decisions about pay, you really need to know that data is accurate and trustworthy — and AI can help.
Here’s just one example: Peerful is a performance assessment and calibration tool that uses a machine learning algorithm to incentivise honest employee feedback.
Among other things, it does this by looking for relationships between assessors' own scores and the scores they give others. It also takes into account other data about them and their relationships with their colleagues.
The result is reliable, accurate performance data — which can be used to make important decisions once compensation review season rolls around.
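We don't know exactly how Peerful's algorithm works under the hood, but here's a rough Python sketch of the general idea: comparing the scores each assessor gives against the group consensus, and flagging raters whose feedback looks systematically skewed. Every name, score and threshold below is made up for illustration.

```python
# Hypothetical sketch (not Peerful's actual algorithm): flag assessors whose
# peer ratings deviate sharply from the group consensus, so their scores can
# be reviewed or down-weighted before they feed into pay decisions.
from statistics import mean

# assessor -> {colleague: score given, on a 1-5 scale}
reviews = {
    "alice": {"bob": 5, "carol": 5, "dave": 5},
    "bob":   {"alice": 3, "carol": 4, "dave": 2},
    "carol": {"alice": 4, "bob": 3, "dave": 3},
}

# Average score each person received across all assessors (the "consensus")
received: dict[str, list[int]] = {}
for assessor, given in reviews.items():
    for colleague, score in given.items():
        received.setdefault(colleague, []).append(score)
consensus = {person: mean(scores) for person, scores in received.items()}

# Flag assessors who are, on average, far more generous or harsh than the group
DEVIATION_THRESHOLD = 1.0  # illustrative cut-off, in rating points
for assessor, given in reviews.items():
    bias = mean(score - consensus[colleague] for colleague, score in given.items())
    if abs(bias) > DEVIATION_THRESHOLD:
        print(f"{assessor}: average deviation {bias:+.2f} -- review before calibration")
```

A real tool would weigh in far more signals (relationships between colleagues, historical rating patterns and so on), but the principle is the same: use the data itself to spot feedback that can't be taken at face value.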
2. Enhancing benchmarking possibilities
Evaluating your salaries against industry standards allows you to build a fair and competitive compensation strategy. That’s why we created our salary benchmarking tool, which draws on an extensive dataset covering more than 1,200 European companies.
But highly specialised jobs and new markets are emerging every day. That means there are moments when we don’t have the robust market data we need to provide an accurate benchmark.
To solve this problem, we built FiguresAI: an AI-enabled tool that provides meaningful salary insights across Europe and beyond. FiguresAI uses AI predictions to help HR and compensation leaders make strong, data-driven decisions about pay — even when that data is hard to find with traditional tools.
3. Making recommendations on individual pay decisions
The big question is, can AI be used to make decisions about individual employees’ compensation? The short answer is yes — but it should be handled carefully.
Here’s an example: according to an SHRM article from 2019, IBM has already been using AI in its compensation systems for several years.
Managers use an in-house machine learning algorithm that provides recommendations for salary increases, from low to high to no increase at all.
Naturally, some people feel a bit queasy about the idea of an algorithm determining their pay — which we’ll talk about a bit more below. But it’s important to point out that the final decision at IBM always comes down to the manager’s discretion.
That said, less than 5% of managers have disagreed with the AI — and managers who follow its advice have reportedly cut attrition by 50%.
So, what’s the catch?
Like any new technology, AI comes with certain risks and challenges. And, when something as important as employee compensation is at stake, organisations that choose to adopt these tools need to be aware of them.
Negative reactions from employees and managers
First of all, some people just don’t like the idea of an AI getting involved with pay decisions. In the Gartner survey we talked about above, 60% of total rewards leaders said the fear of negative reactions from employees and managers was one of their biggest barriers to automating pay decisions.
To figure out if this is really an issue, Gartner conducted an experiment that asked employees to respond to pay decisions made by either a manager or an algorithm. While the respondents did rank the managers’ decisions as slightly more fair, the outcome of the decision had a much bigger impact.
Put another way, it’s not who makes pay decisions that really matters to employees, but the result of those decisions — i.e. whether or not the employee gets more money. Logically, that means that as long as AI is being used responsibly, there’s no reason to think your employees will react negatively to its decisions.
Possibility for bias
An AI model is only as good as the data it’s trained on. And when biases exist in that data, AI can perpetuate them.
Here’s an example. Way back in 2014, Amazon started building an AI tool to streamline the company’s recruitment process. Its job was simple: review incoming resumes and identify the most promising ones for human review.
The problem was, the AI was trained on patterns in resumes that had been submitted to the company over a ten-year period. Based on this data, it soon learned who Amazon considered to be the strongest candidates: men.
While the AI didn’t outright reject applications from female candidates, it did penalise CVs that specifically mentioned ‘women’. That meant candidates who attended all-women’s colleges or listed women’s sports teams among their extracurriculars went straight onto the ‘no’ pile.
The point is, if there’s bias in the data you’re using to train an AI tool, that bias will come out the other end — even if it’s unintentional. Those developing AI tools for use in compensation need to be hyper-aware of this possibility — and include measures to address it.
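To make that concrete, here's a simple (and deliberately simplified) sketch of one such measure: comparing the average raise an AI model recommends across demographic groups, and flagging a large gap for human review. The data and threshold are invented for illustration, and a real fairness audit would go much deeper.

```python
# Hypothetical guard-rail: compare AI-recommended raise percentages across
# demographic groups and flag large gaps for human review.
# Illustrative data and threshold only -- a real audit would be far more thorough.
from statistics import mean

recommendations = [
    {"employee": "e1", "group": "women", "recommended_raise_pct": 2.0},
    {"employee": "e2", "group": "women", "recommended_raise_pct": 2.5},
    {"employee": "e3", "group": "men",   "recommended_raise_pct": 4.0},
    {"employee": "e4", "group": "men",   "recommended_raise_pct": 3.5},
]

# Group the model's recommendations by demographic group
by_group: dict[str, list[float]] = {}
for rec in recommendations:
    by_group.setdefault(rec["group"], []).append(rec["recommended_raise_pct"])

averages = {group: mean(raises) for group, raises in by_group.items()}
gap = max(averages.values()) - min(averages.values())

MAX_ACCEPTABLE_GAP = 1.0  # percentage points; illustrative threshold
print(f"Average recommended raise by group: {averages}")
if gap > MAX_ACCEPTABLE_GAP:
    print(f"Gap of {gap:.1f} pp between groups -- escalate to a human reviewer")
```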
Compliance concerns
The EU pay transparency directive will come into effect across the EU’s 27 member states over the next few years. Among other things, it will put more scrutiny on organisations’ pay decisions and how they’re made. And unlike today, the burden of proof in pay discrimination cases will sit entirely with the employer, not the employee.
That means that if an AI model makes biased or otherwise unfair pay decisions, companies could face reputational damage at best, and fines or legal action at worst.
So, what’s the solution? Even if companies use AI to help them make compensation decisions, they shouldn’t rely blindly on what it tells them. There should always be human checks and balances in place to ensure decisions comply with regulations.
Lack of nuance
According to Gartner, 68% of total rewards leaders are worried about AI’s ability to capture employees’ unique contributions to the business.
The problem is, AI models are very good at understanding and analysing things like performance ratings, skills and experience. But they may not be able to capture some of the more nuanced factors that go into pay decisions. Things like employee potential, culture fit and subjective evaluations, for example.
One way to address this limitation is by creating a measure that captures these more subtle elements, and having managers manually input it into the AI. This could be as simple as rating each employee as high, medium or low, or ranking them on a 5-point scale.
An AI model can then combine this data with other factors like internal and external benchmarking, performance data and more to make its recommendations.
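To illustrate how that might fit together, here's a rough Python sketch that blends a manager's ‘high/medium/low’ input with performance and benchmarking data to suggest a raise band. The weights, bands and field names are illustrative assumptions, not a description of any particular tool.

```python
# Hypothetical sketch: blend a manager's manual "nuance" rating with
# performance and benchmark data to produce a raise recommendation.
# Weights, bands and field names are illustrative assumptions only.

NUANCE_SCORES = {"low": 0.0, "medium": 0.5, "high": 1.0}

def recommend_raise(current_salary: float,
                    market_median: float,
                    performance_score: float,  # 0.0 - 1.0 from the review cycle
                    manager_nuance: str) -> float:
    """Return a suggested raise percentage (illustrative logic only)."""
    # How far the employee sits below the market median, as a share of the median
    market_gap = max(0.0, (market_median - current_salary) / market_median)

    # Simple weighted blend of performance, market position and manager input
    score = (0.5 * performance_score
             + 0.3 * market_gap
             + 0.2 * NUANCE_SCORES[manager_nuance])

    # Map the blended score onto a raise band
    if score < 0.3:
        return 0.0
    if score < 0.6:
        return 3.0
    return 6.0

# Example: below-market salary, strong review, manager rates potential as high
print(recommend_raise(48_000, 52_000, 0.8, "high"))  # -> 6.0
```

Whatever the exact weighting, the key point is that the manager's judgement becomes one structured input among several, rather than the whole decision.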
How to use AI fairly and responsibly
AI, especially in its current form, is a very new technology. Before diving head-first into AI-enhanced compensation, employers should develop and adopt organisational strategies that support its responsible use.
If you’re not sure where to start, IBM’s five pillars of AI ethics are a good baseline:
- Explainability: The AI system should be able to explain and contextualise how it came to a decision.
- Fairness: The AI system should help humans make fairer choices by countering human bias and promoting inclusivity.
- Robustness: The AI system should be actively protected from attacks and attempts to compromise system security.
- Transparency: Users of the AI system should be able to see how it works and understand its strengths and limitations.
- Privacy: The AI system should prioritise and safeguard users’ privacy and personal data.
So, can AI tools replace humans in compensation systems?
In a word, no. In the future, we might see AI tools that can fully capture the complex nuances that go into every compensation decision — but we’re not there yet.
Hiring managers, compensation leaders and decision-makers should use AI tools with caution — and always alongside their own expertise and intuition.
Learn more
Want to learn more about AI and compensation? We recently hosted a webinar on this very topic, where our CEO and founder Virgile Raingeard got down into the details with Brainfood’s Hung Lee and Senior Compensation and Benefits Leader Arif Ender.
Watch the replay here!