How ethics shapes AI assessment
Based on an interview with Dr. Luke Treglown, Director of AI & Assessment R&D at Deeper Signals, former Director of AI & Head of Innovation Labs at Thomas International, who holds a PhD in the psychology of employee disenchantment.
AI is changing how we understand people at work, but ethics, consent, and human judgment still set the standard. This piece explores four questions leaders and practitioners ask most: Can AI spot faking? Should assessments be invisible? Where does AI coaching help and where does it stop? What’s the biggest misconception about AI in talent decisions?
Can AI tell authenticity from “faking” in assessments?
Short answer: not on its own. It needs psychological guidance and a lot of high-quality data.
“The short answer is: by itself, no, unless we help it understand… Can AI spot who we truly are vs. how we act? Probably not accurately without clear guidance on what our true self is.”
Luke differentiates between two kinds of “acting” we see in assessments:
- Impression management (playing a role you think the job wants)
- Self-deception (presenting a slightly more extreme version of your real strengths)
“Psychologists differentiate between impression management versus self-delusion when people present a slightly more nuanced version of themselves.”
Practically, this means AI shouldn’t be positioned as a lie detector. Used well, it can flag patterns that suggest coached responses or persona-playing, but human expertise interprets intent, context, and fit.
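To make that point concrete, here is a minimal sketch of the kind of pattern-flagging Luke describes: a screen that routes unusual response patterns to a human reviewer, never a verdict on authenticity. The item labels, scale, and thresholds are all invented for illustration; a real instrument would rely on validated social-desirability and consistency indices.

```python
# Illustrative sketch only: flags response patterns for HUMAN review.
# It does not, and cannot, judge whether a person is being authentic.
from statistics import pstdev

# Hypothetical 1-5 Likert items; True marks items a coached candidate
# might uniformly max out (assumed labels, not a validated scale).
ITEMS = {
    "stays_calm_under_pressure": True,
    "enjoys_tight_deadlines": True,
    "prefers_working_alone": False,
    "double_checks_details": False,
}

def flag_for_review(responses: dict[str, int]) -> list[str]:
    """Return human-readable flags; an empty list means nothing unusual."""
    flags = []
    desirable = [responses[item] for item, marked in ITEMS.items() if marked]

    # Pattern 1: every socially desirable item at the ceiling.
    if all(score == 5 for score in desirable):
        flags.append("all socially desirable items at maximum")

    # Pattern 2: almost no variance anywhere (straight-lining).
    if pstdev(list(responses.values())) < 0.5:
        flags.append("near-identical answers across all items")

    return flags

candidate = {"stays_calm_under_pressure": 5, "enjoys_tight_deadlines": 5,
             "prefers_working_alone": 2, "double_checks_details": 4}
print(flag_for_review(candidate))  # ['all socially desirable items at maximum']
```

Note the design choice: the function returns reasons rather than a score, so the output lands in front of a person who can weigh intent, context, and fit.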
Should assessments become “invisible”?
Some imagine a future where AI quietly observes how we work, drawing conclusions about our personality in the background.
“Yes, AI will be able to make some assessments invisible, but visibility isn’t the problem here, consent is… The goal isn’t to measure people secretly but to empower them openly.”
Even if background data reveals behavior, it doesn’t equal personality:
“It’ll tell us the what of what is happening rather than the why.”
Ethically, invisible assessment deepens the power imbalance, especially in hiring.
“There’s already a power imbalance between the organization and the candidate… people should feel in control of their own data and their own story.”
Bottom line: Design for transparency, consent, and dignity. Use behavioral signals to support people, not to surveil them.
Where does AI coaching help and where does it stop?
Coaching is one of the most personal forms of development, and while AI can enhance it, it will never replace the human connection that makes it meaningful.
“AI is not going to replace coaches, nor should it,” Luke explains.
He compares it to the difference between Netflix and cinema: both serve a purpose, but in very different ways.
“It’s like Netflix and cinema… AI democratizes development… but people still want the cinema experience of human connection.”
The best approach is a hybrid model, where each complements the other. Human coaches bring empathy, deep exploration, and transformation: the kind of understanding that comes from dialogue and trust. AI, meanwhile, adds consistency and accessibility. It can act as a nudge engine, helping people stay accountable, track progress, and maintain momentum between sessions.
“AI keeps that alive,” Luke notes. “Check-ins, accountability, reinforcing goals — it makes coaching available to more people in an affordable and accessible way.”
Together, human insight and AI support make development more continuous, personal, and inclusive.
Should AI tell someone they’re “not a fit”?
When it comes to decision-making, automation should never replace empathy. Rejecting a candidate outright on the basis of an algorithm alone is risky and unfair.
“Should AI ever make a decision that someone’s not a good fit? No, probably not,” Luke says. “It should enhance human judgment, not replace it.”
AI can, however, support reflection. When guided by the right data and human oversight, it can help explain why a mismatch exists, turning what might have felt like rejection into an opportunity for insight.
“Should AI ever tell someone they're not a good fit? Possibly, but only in the way a good coach would. So it’s not about rejection, it’s about reflecting on why someone's not a good fit. It’s about clearly understanding what success in a role looks like.”
In this way, technology becomes a tool for self-awareness, not exclusion. Instead of closing doors, it helps people understand where they thrive, reframing “not a fit” as a moment of clarity and direction, not failure.
What’s the biggest misconception about AI in talent assessment?
One of the most common misconceptions is that AI is a magic solution, a tool that can instantly solve complex human challenges. In reality, it’s not a crystal ball.
“The biggest misconception is that AI is a silver bullet,” Luke explains. “I think AI is more of a mirror: it reflects back what we put into it.”
If the data we feed it is biased, incomplete, or unclear, AI will simply amplify those flaws.
“We need to define what good looks like. AI is the tool, not the solution.”
In other words, the quality of AI’s output depends entirely on the quality of human thinking behind it.
“Treat AI like a genie,” Luke advises. “Be precise about the question and think how it could go wrong.”
Be intentional. Define your goals, validate your data, and always keep humans in the loop. AI can help us move faster and see patterns more clearly, but wisdom still comes from people.
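The mirror metaphor can be shown in miniature. The sketch below, built on entirely fabricated records, fits a naive “model” (nothing more than the historical hire rate per trait) to a skewed hiring history and watches the old bias come straight back out as a prediction.

```python
# Toy demonstration that a model trained on skewed history mirrors the skew.
# All records are fabricated for illustration; no real data or vendor model.
from collections import Counter

# Hypothetical past decisions: extroverts were hired far more often,
# regardless of how those hires actually performed later.
history = (
    [("extrovert", "hired")] * 80 + [("extrovert", "rejected")] * 20
    + [("introvert", "hired")] * 20 + [("introvert", "rejected")] * 80
)

counts = Counter(history)

def naive_hire_probability(trait: str) -> float:
    """The 'model' is just the raw historical hire rate for the trait.
    It encodes past behaviour, not actual ability to do the job."""
    hired = counts[(trait, "hired")]
    return hired / (hired + counts[(trait, "rejected")])

print(naive_hire_probability("extrovert"))  # 0.8: the old bias, replayed
print(naive_hire_probability("introvert"))  # 0.2: exclusion dressed up as prediction
```

Nothing in this toy “model” knows who can sell; it only knows who used to get hired. Defining what good looks like before any fitting happens is the human job Luke is pointing at.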
From one “perfect profile” to many ways of winning
For too long, hiring decisions have been shaped by narrow ideas of success: the belief that there’s one “ideal” type of performer for every role. Think of assumptions like “great salespeople are all extroverts.” In reality, there are many ways to be great.
“A large sales organization believed this,” Luke recalls. “But when we looked at the data, we found that introverted, detail-oriented people also performed really well; they just got there by different means.”
AI has the power to make this diversity visible. By combining data about individual traits with insight into a team’s culture and goals, it can reveal multiple paths to high performance — not just the loudest or most obvious ones.
“AI can draw the connections and give clear guidance, like here are the three or four steps you can use to bring these two bits of information together to make it meaningful.”
Instead of forcing people to fit a mold, this approach helps organizations understand how different strengths succeed in different ways and how to support each person in achieving their version of excellence.
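As a hedged sketch of how those multiple paths might surface from data: cluster the trait profiles of proven high performers and inspect each cluster’s centre. The trait scores below are invented, and a real analysis would need validated measures, a defensible performance criterion, and far more people.

```python
# Sketch: surface distinct trait profiles among high performers, instead of
# assuming one ideal. Trait scores (0-1) are fabricated for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: [extraversion, attention_to_detail] for top-quartile salespeople.
top_performers = np.array([
    [0.90, 0.30], [0.85, 0.40], [0.80, 0.35],  # outgoing, relationship-led
    [0.20, 0.90], [0.25, 0.85], [0.30, 0.95],  # quieter, detail-led
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(top_performers)

for i, centre in enumerate(kmeans.cluster_centers_):
    print(f"profile {i}: extraversion={centre[0]:.2f}, detail focus={centre[1]:.2f}")
# Two viable routes to the same outcome: the opposite of one "ideal" mold.
```

Each cluster centre reads as a different way of winning, which is exactly the kind of guidance a coach or manager can act on.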
Why human judgment still matters in an AI-driven world
As AI continues to transform how we assess and develop people, one principle remains constant: human judgment is irreplaceable. Technology can scale access, improve consistency, and surface insight — but only psychology brings fairness, context, and meaning. The future of assessment depends on both: AI to illuminate patterns, and people to interpret them with empathy and integrity. When humans stay in control of their own stories, AI becomes not a replacement for judgment, but a reflection of our best intentions.
“As AI gets better at observing people, we need to design for transparency, trust, and dignity. People should feel in control of their own data and their own story.”
That’s the future of assessment worth building.