AI in recruitment: when fairness requires technology – and human judgment

AI promises greater objectivity in recruitment, but it can also reinforce hidden biases. Two experts explain how to combine technology, ethics, and human judgment in the pursuit of fairer decisions.
By Sofie Brogaard Schmidt, Editor, DANSK HR


Helene Hoppe Revald, Head of Psychometrics at Assessio and a trained psychologist, leads the development and validation of assessment tools across the Nordic region.
Louise Monnerup, Solution Architect and authorised psychologist at Assessio, serves as the link between the psychometric work and the organisations that use the tools in practice.
AI is playing an increasingly prominent role in recruitment processes, and for many organisations the technology raises both hope and concern. On the one hand, there are expectations of more structure and fewer gut feelings. On the other, there is fear of new forms of bias and decisions that cannot be explained.
This balance is precisely what Helene Hoppe Revald and Louise Monnerup work with every day.
Bias does not disappear – it changes form
When AI becomes part of the recruitment process, the question quickly arises: can technology help us avoid the biases that humans typically bring into evaluations? Both Helene Hoppe Revald and Louise Monnerup point out that bias does not disappear – it simply takes other forms. They explain that our human biases are often invisible to ourselves. We tend to prefer certain types of people, read meaning into a glance, a hobby, or a career path – all without realising it. But AI learns from historical data, and if that data carries traces of past preferences, the imbalance is reproduced with greater impact. This means AI can reinforce the very patterns organisations are trying to eliminate.
An important difference is that human bias varies. We may be influenced by our mood, the context, our energy level, or subtle chemistry. Technological bias, on the other hand, is more stable. “AI is less noisy than humans. But once it has learned a bias, it repeats the same assessment over and over again,” says Louise. This makes the consequences greater and more systematic.
Helene highlights another challenge: AI’s ability to identify patterns that may seem logical based on data, but that do not necessarily have any real relevance to job performance. “Technology can find patterns that we as humans would never even think about—and therefore might not detect,” she says. This underscores the need for critical professional expertise, especially as organisations begin to outsource parts of the assessment process to algorithms.
Three places where bias arises
In their work, they see that bias typically enters at three points:
- Data: is the information relevant and valid—and does it truly reflect what you want to predict?
- Model: how is the algorithm constructed and weighted?
- Interaction: how do people’s questions, assumptions, and prompts influence the outcome?
“Understanding these three levels is crucial if you want to use AI as a serious decision-support tool,” says Helene. She emphasises that humans can still significantly influence the system, even unintentionally. This can range from imprecise questions to unconscious assumptions about what a ‘good candidate’ looks like.
Louise also points out that some organisations underestimate the importance of data validity. “If you feed an AI with data that has no predictive power in relation to job performance, the output will naturally be misleading. It’s not the technology that fails – it’s our assumptions,” she says.
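Louise's point about data validity can be checked quantitatively. One common heuristic is the "four-fifths rule": the selection rate of any group in the historical data should be at least 80% of the highest group's rate. The sketch below (group names and numbers are invented for illustration, and this is not a description of Assessio's own tooling) shows how such a check might look:

```python
# Sketch: screening historical hiring data for adverse impact before
# using it to train or calibrate a model. All figures are illustrative.

def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical historical data: applicants and hires per group
history = {
    "group_a": {"applicants": 200, "selected": 40},  # 20% selection rate
    "group_b": {"applicants": 150, "selected": 18},  # 12% selection rate
}

rates = {g: selection_rate(d["selected"], d["applicants"])
         for g, d in history.items()}
ratio = adverse_impact_ratio(rates)

print(f"Selection rates: {rates}")
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.12 / 0.20 = 0.60
if ratio < 0.8:
    print("Warning: the data may encode a biased selection pattern.")
```

A check like this does not prove the data is fair, but it can flag exactly the situation both experts warn about: an AI trained on such history would faithfully reproduce the imbalance.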
Structure as a counterweight to gut feelings
Although AI can carry biases, the technology can also help create more consistent and structured evaluations.
When large amounts of information need to be connected – test results, case exercises, interview notes, and job requirements – AI can help maintain focus and ensure that all candidates are assessed on the same parameters. According to Louise, this is where AI can make a clear difference.
“Research shows that structure is one of the most effective ways to minimise bias,” says Helene. “AI can help maintain that structure, so assessments don’t drift as complexity increases.”
Helene adds that AI can function as an additional perspective that challenges our own assumptions. Not as the final judge, but as support that can indicate when something should be reconsidered. “It can remind us that we may be heading toward a decision based more on chemistry than on competencies,” she says. However, this structure also depends on organisations maintaining a well-defined recruitment process.
If the process is already disorganised or based on gut feeling, AI will simply become another element to navigate. Technology does not solve problems – it exposes them.
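The structure the two experts describe, assessing every candidate on the same parameters, can be made concrete as a fixed scoring rubric. The criteria and weights below are invented for illustration; the point is only that the same weighted criteria are applied to everyone, so assessments cannot drift from candidate to candidate:

```python
# Sketch of a structured evaluation: each candidate is scored on the
# same predefined criteria with fixed weights. Criteria and weights
# are hypothetical.

CRITERIA_WEIGHTS = {
    "cognitive_test": 0.40,
    "structured_interview": 0.35,
    "work_sample": 0.25,
}

def weighted_score(scores):
    """Combine per-criterion scores (0-100) using the fixed weights.

    Raises if a criterion is missing, so no candidate can be
    assessed on a different basis than the others.
    """
    missing = CRITERIA_WEIGHTS.keys() - scores.keys()
    if missing:
        raise ValueError(f"missing criteria: {sorted(missing)}")
    return sum(scores[c] * w for c, w in CRITERIA_WEIGHTS.items())

candidate = {"cognitive_test": 80, "structured_interview": 70, "work_sample": 90}
print(weighted_score(candidate))  # 80*0.40 + 70*0.35 + 90*0.25 = 79.0
```

Note the design choice: an incomplete assessment fails loudly rather than being scored on partial information, which is one small way structure counters the "gut feeling" process the text warns about.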
The candidate experience: fairness or distance?
While AI can lead to more structure, it also creates new dilemmas in the interaction between candidate and organisation. Both Helene and Louise emphasise that candidates respond differently to the technology. Some experience AI-based processes as more fair, because the assessment focuses more on content than on appearance. Others miss the human interaction and become sceptical of systems they cannot see into.
“There is an ethical responsibility to be transparent. If you cannot explain why you use AI and what happens to the data, it can create uncertainty among candidates,” says Louise. Ethics is therefore not only about data foundations and algorithms, but also about communication and the candidate experience.
Something more fundamental is also at stake: how candidates feel they are seen as people. Candidates largely judge organisations by how seen and respected they feel. If AI is perceived as a barrier, it can harm both employer branding and talent attraction.
From black box to transparency
Many leaders experience AI as opaque. Here, Helene recommends asking the same critical questions you would ask about any other recruitment method:
- What knowledge is it based on?
- What data is included?
- What limitations does the model have?
- How is fairness ensured?
“You need to be able to explain what the model bases its assessments on – otherwise you cannot stand behind it yourself,” she says.
Louise elaborates that the workings of publicly available Large Language Models are often difficult to fully understand, whereas more narrowly defined AI solutions typically provide greater insight into the underlying data. This makes it easier to assess whether the technology fits the purpose it is meant to support.
Competencies are changing, but the foundation remains
Although AI requires new considerations, it does not mean that HR professionals and leaders must start from scratch. Both point out that many of the most important competencies are already in place.
The ability to assess data quality, understand what predicts job performance, and maintain structured processes remains crucial. “We should not leave the assessment task to technology alone. We must use our expertise to ensure that technology is used correctly,” says Louise.
Helene adds that awareness of our own biases becomes even more important, because humans still influence the system through the questions and instructions they provide. “We are still part of the equation. That responsibility cannot be outsourced.”
The strategic decision: where does AI make sense?
If AI is to be used responsibly, it must be a conscious choice – not a quick way to save time. Organisations should consider where the technology truly creates value and where human contact is indispensable.
“AI should not be used for its own sake,” says Louise. “It should be used where it makes the process more consistent and fair – and not in situations where a human relationship is what matters most.”
Helene concludes with a point about transparency: “If we cannot explain to candidates why we use AI and how it strengthens the process, that may be a sign that we have lost sight of the human element. Fairness is about both data and dialogue.”
As they both emphasise, AI is only a tool – not a replacement for judgment. Even though the technology is developing rapidly, it is still people who set the direction. Fairness arises only when we dare to combine data, judgment, and ethical awareness.



