When Lucia Pagliarone found her annotated CV lying on a desk, it provided compelling evidence when she took her boss to a tribunal on the grounds of sex discrimination. She won the case, but it is a stark reminder of how hiring decisions can be influenced by factors beyond skills and aptitude, with the fallout having serious repercussions.
Being swayed by a candidate’s appearance is nothing new, but in a post-Weinstein, ‘#MeToo’ climate of heightened sensitivities around discrimination, tackling bias in recruitment has become big business, and increasingly robots are being called upon to restore some objectivity.
Fuelled by a new breed of artificial intelligence (AI)-powered applications, technology that can bypass physical attributes and analyse candidate data at speed, without emotion or prejudice, is gaining traction. Of the 1,200 hiring professionals surveyed by recruitment firm Korn Ferry, almost two-thirds say AI has changed the way the process is carried out, and believe the technology attracts higher-calibre candidates.
There is an increasing demand to remove inherent human bias from recruitment decisions. Respondents to LinkedIn’s 2018 Global Recruiting Trends report cite time saving and the removal of prejudice, be it around age, race, religion or gender, as principal benefits.
“It’s not surprising that algorithms are becoming very attractive as a way to eradicate the risk of bias and take the decision out of the hands of an interviewer,” says Emma O’Leary, a consultant with Manchester-based employment law firm Elas.
“Human bias is often subconscious, but subconscious discrimination is still discrimination. In an ideal world managers would have robust equality and diversity training to overcome sexist or racist views, but clearly such bias is still prevalent, as the example concerning Lucia Pagliarone highlights.”
If notions of being grilled by R2-D2 at a desk come to mind, the reality is rather different. The most common iterations are automation tools typically deployed to filter out unconscious bias in the early stages of the hiring process.
By anonymising a candidate’s gender and social and educational characteristics, they help to create a level playing field, while predictive analytics can assess cultural or technical fit against a specified set of criteria and anticipate the likelihood of success in the role, delivering greater efficiency and productivity.
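The anonymising step described above can be imagined as little more than stripping identifying fields from a candidate record before a reviewer sees it. The sketch below is a minimal, hypothetical illustration of that idea; the field names are assumptions, not drawn from any real hiring platform:

```python
# Minimal sketch of blind screening: remove fields that could reveal
# gender, age or background before a reviewer sees the application.
# Field names are illustrative only.

REDACTED_FIELDS = {"name", "gender", "date_of_birth", "photo_url", "school"}

def anonymise(candidate: dict) -> dict:
    """Return a copy of the candidate record with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in REDACTED_FIELDS}

candidate = {
    "name": "L. Example",
    "gender": "F",
    "school": "Example University",
    "skills": ["python", "sql"],
    "years_experience": 6,
}

blind = anonymise(candidate)
# 'blind' retains only the skills and experience fields
```

A production system would of course also have to catch identifying details buried in free text, which is where the machine-learning element comes in.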
As well as helping to weed out unsuitable applicants, AI can profitably target those who might normally be deterred from applying. For example, concerned by the lack of women responding to its data scientist vacancies, UK cyber security company Panaseer turned to the predictive algorithms of Textio, a platform that uses machine learning to identify gender-biased language in job descriptions and suggest linguistic tweaks.
“It highlighted how some of the wording in our job posts, such as ‘ambitious’, ‘tackle’ and ‘driven’, was typically associated with masculine traits, which was actually creating a subconscious bias,” explains the company’s chief scientist Mike MacIntyre.
“Alternatives were recommended to make the descriptions more inclusive and appealing to women such as ‘meaningful’, ‘collaborative’, ‘supportive’ and ‘contribute’.”
This simple amendment has made a big impact, with Panaseer confirming a 60pc uplift in female candidates and even an all-female shortlist for one of its most recent positions.
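The wording check Panaseer describes can be pictured, in its crudest form, as a lookup over a list of masculine-coded words with suggested alternatives. The toy sketch below uses only the word pairs quoted above; everything else is a hypothetical simplification, far removed from a machine-learning platform like Textio:

```python
# Toy gendered-language check for job descriptions.
# The word/alternative pairs come from the examples quoted in the
# article; a real system would learn such associations from data.

SUGGESTIONS = {
    "ambitious": "meaningful",
    "tackle": "contribute to",
    "driven": "collaborative",
}

def flag_wording(text: str) -> list[tuple[str, str]]:
    """Return (word, suggestion) pairs for flagged words found in the text."""
    words = text.lower().replace(",", " ").split()
    return [(w, SUGGESTIONS[w]) for w in words if w in SUGGESTIONS]

ad = "We want an ambitious, driven engineer to tackle hard problems"
for word, alt in flag_wording(ad):
    print(f"'{word}' may read as masculine-coded; consider '{alt}'")
```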
“We want to rely on an online platform to assess skills and drum out any bias from the outset, but how someone comes across in person is very important; we’re a small company and we still want to judge rapport with a person in a face-to-face interview,” he adds.
Indeed, using AI to do the legwork before reverting to the human touch in the final stages remains the default approach for those who still feel there is a role for emotional intelligence in the hiring process. However, for Gareth Jones, chief operating officer of recruitment specialist Headstart, it is a compromise that means companies are ultimately falling at the final hurdle.
“Unfortunately, humans are inherently biased. So, no matter how much technology you build into the hiring funnel, if at some point you have a face-to-face human interaction, the danger is that bias creeps in.”
“As humans, we are, at the moment, pretty awful at preventing our decisions from being influenced by someone’s colour, their age, their appearance, their accent, even their name.”
If the answer, then, is to remove human intervention completely, it seems most UK businesses are not ready for that leap of faith. While almost two-thirds of respondents surveyed by CRM developer Pegasystems expect the use of AI to conduct interviews and shortlist candidates to be standard practice within the next decade, only 30pc believe an algorithm will make the final hiring decision. The pervasive view remains that, ultimately, a machine cannot replace human judgement on a person’s soft skills and cultural fit.
However, for anyone who has been asked by a living, breathing recruiter ‘where do you see yourself in five years’ time?’, would an entirely automated exchange with a robot be any more formulaic?
Philip Say, vice president of innovation product management for technology process transformation company Sutherland Global, says not. Tasked with making the automated conversations behind the company’s bot-led interview system, TASHA, seem authentic and engaging, he argues that, far from being a poor substitute for human interaction, candidates actually prefer to converse via a messaging-based chatbot. Perhaps not surprisingly, it is an approach that resonates most with millennials.
“Generally speaking, that segment is looking for a differentiator in the candidate experience,” he says.
“Also, most have only known messaging as a means to communicate and like the short, snappy exchanges, which build into more of a dialogue. The fact it remains completely neutral, with gender, age and race unidentified, keeps things focused on what matters.”
It’s a bold statement, because whether an algorithm can remove bias completely remains a moot point. For every proponent evangelical about an algorithm’s objectivity there is a sceptic who says the case is overstated. Isn’t a robot fed biased data only going to display the same characteristics as its biased developer?
“Yes, algorithms are informed by humans, so it does rely on those designers and developers behind the bot to ensure that they are mindful of the ethics and recruiting compliance rules,” agrees Say.
“A big focus for us is avoiding cultural bias, so before we design a chatbot conversation we leave our North American bubble and travel to some very remote places to hear the experiences of a diverse range of people, which then informs the language and content used by the chatbot, with regional variations also taken into account.”
Dr Boris Altemeyer, chief scientific officer at Bath-based AI start-up Cognisess, is another staunch defender.
“The machine can detect micro-expressions,” explains Altemeyer. “These emotions show on the face for only a fraction of a second – it’s so quick it can’t be detected by the naked eye, nor can it be faked.”
“Technically this system can recruit in its entirety, but we would never advocate removing people completely from the process. If you think about humans reviewing 60 or more video interviews a day and remaining absolutely unbiased, or as sharp as when they watched the first one, that would be a tall order for anyone. So it’s about getting the purest data to them, so they make the best decisions.”
For the time being at least, the human recruiter is still hired.