Written by Simon Stoker

Rory Sutherland has a great point about how we solve problems.
He points out that the London Underground didn't just try to make trains faster; they added dot-matrix boards so waiting felt better. Uber didn't make taxis arrive faster; they showed you a car moving on a map so the wait felt more predictable.
His argument is that we over-index on "engineering" solutions because they’re easier to justify, even when they aren't the most effective way to solve the human problem.
In talent acquisition, I think we do the same thing.
Most TA improvement initiatives focus on mechanistic solutions: optimising the ATS, reducing time-to-fill, adding interview stages to 'de-risk' hiring. None of these are bad, but notice what they have in common: you can defend them. You can put them on a slide and show a business leader that you are making progress.
The psychological questions are harder and we tend to avoid them.
Consider how perception shapes waiting times: a candidate might happily endure a three-week wait for a role they want if they feel valued, yet withdraw within days if they're treated impersonally.
What if we measure what makes us look good rather than what makes us better? We choose metrics we can defend in business reviews, such as time-to-fill, cost-per-hire, and application volumes, because they show we're busy. The metrics that would actually tell us whether we're improving are harder to track and uncomfortable when they look bad. So we have optimised for psychological safety, not for learning.
What if hiring for 'culture fit' is really just our bias for cognitive ease? We say we want someone who "gets it" or "fits the team." What we actually mean is: this person won't make us uncomfortable. Hiring someone who challenges our thinking takes more effort than hiring someone who mirrors it. So we filter for comfort and call it "cultural alignment."
These questions don't have benchmarks. There's no best-practice guide for productive discomfort. Data and process are crucial, but I think some of the hardest work in TA leadership is creating space to explore interventions that don't have clear models yet: the kind that make business leaders uncomfortable because you can't point to a case study or a benchmark.
The teams willing to experiment with psychological interventions alongside mechanistic ones will have more range. They'll see some of the problems others miss. And they'll adapt faster when something stops working.
