For Recruiting, Is There Life After Automation?

Machine learning, artificial intelligence, robotics, automation — they’ve been the talk of the recruiting town for months. But what will they ultimately mean for recruiters, recruiting, and the whole hiring process?

Jason Roberts, from Randstad Sourceright, and I talk about it all in the podcast below, including:

  • What we’ve learned about the computer vs. human battle in chess
  • The hiring decision, interview scheduling, and matching
  • What automation means for the job description
  • Whether there’ll be fewer or more recruiters as automation grows.


3 Comments on “For Recruiting, Is There Life After Automation?”

  1. Great listen!

    I do have one point of contention – it actually is possible for a computer/algorithm to have a bias.

    While humans are most certainly affected by unconscious bias, so are algorithms, perhaps even more so because matching algorithms aren’t even conscious in the first place. 🙂

    In all seriousness, “unconscious bias” (according to this source: http://bit.ly/29VLcfK0) refers to “a bias (prejudice in favor of or against one thing, person, or group compared with another) that we are unaware of, and which happens outside of our control. It is a bias that happens automatically and is triggered by our brain making quick judgments and assessments of people and situations, influenced by our background, cultural environment and personal experiences.”

    While matching algorithms cannot be biased by uniquely human elements such as background, cultural environment, or personal experiences, that does not mean they are immune to bias. I actually wonder why anyone would assume that algorithms cannot unintentionally “favor” certain factors or people over others.

    When I first started reading about people saying we need to use algorithms to eliminate unconscious bias, I knew that was a slippery slope. It made me immediately wonder about “algorithmic bias,” which I didn’t even know was actually an official thing until I Googled it a while ago.

    It turns out that algorithmic bias can exist even when the developer of the algorithm has no intention to discriminate, and it can also arise simply from the data sources used.

    According to this interesting resource (http://bit.ly/2hh9bx2), “Even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data.”
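
    That quote is easy to demonstrate. Here is a minimal sketch (entirely made-up data; it assumes NumPy and scikit-learn are available) in which the sensitive attribute is withheld from training, yet the model still scores the two groups differently by recovering the signal from a correlated proxy feature:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical sensitive attribute (group 0 or 1) -- never shown to the model.
    group = rng.integers(0, 2, n)

    # A "neutral" feature that happens to correlate with group membership
    # (think zip code, hobby keywords, or school name on a resume).
    proxy = group + rng.normal(0, 0.5, n)

    # Historical hiring labels reflect past biased decisions favoring group 1.
    skill = rng.normal(0, 1, n)
    hired = (skill + 1.5 * group + rng.normal(0, 0.5, n) > 1).astype(int)

    # Train only on skill and the proxy; the sensitive attribute is suppressed.
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, hired)

    scores = model.predict_proba(X)[:, 1]
    print("mean predicted hire prob, group 0:", scores[group == 0].mean())
    print("mean predicted hire prob, group 1:", scores[group == 1].mean())
    # The gap persists: the model reconstructed group membership from the proxy.
    ```

    Nothing in the training data names the group, yet the score gap survives, which is exactly the “correlations existing in the data” problem the quote describes.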

    With matching algorithms, bias can result from embedded and programmatic “decisions” based on the data set the algorithm operates on (which is arguably never complete, perfect, or truly representative). While not influenced by experience or background as with humans, bias can unintentionally result from whatever the algorithm has “learned” or come to “know” about that data set. That is especially scary because we’re talking about judging the potential match between a candidate and a job based solely on the text people happened to share when creating their resume, social media profile, application, etc.
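
    To show how much the text itself matters, here is a toy example (invented one-line snippets; again assuming scikit-learn) where two candidates with essentially the same skills get different match scores purely because of word choice:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    job = "software engineer experienced in python, machine learning, data pipelines"
    resume_a = "software engineer: python, machine learning, built data pipelines"
    resume_b = "programmer: python, predictive modeling, built ETL workflows"

    # Score each resume by cosine similarity of TF-IDF vectors to the job text.
    m = TfidfVectorizer().fit_transform([job, resume_a, resume_b])
    print("resume A:", cosine_similarity(m[0], m[1])[0, 0])
    print("resume B:", cosine_similarity(m[0], m[2])[0, 0])
    # Resume B describes comparable skills in different vocabulary and scores
    # lower: the "bias" lives entirely in the words the candidate happened to use.
    ```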

    I think this is an especially fascinating area to explore, because humans can be aware of unconscious bias and can identify it when looking for it, but algorithms cannot be “aware” of any unintended biases they may have, and humans may not be able to easily identify algorithmic bias, because it is truly unconscious – and, in fact, inhuman.

    As mentioned in this article (http://bit.ly/1Xihk12), “With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box.”

    As such, could humans ever be capable of anticipating and accounting for algorithmic bias when there is no real way to know what an algorithm will “learn” and base its “decisions” on?

    1. Interesting research, and it makes sense. One company building these matching tools uses past submits, interviews, and hires in similar roles, comparing those past successful profiles to rank new candidate resumes. In that case, human bias would likely be amplified through the system, because the same types of candidates would keep surfacing over time (a rough sketch of this appears below).

       The other bias risk I have seen is companies that choose not to use automated matching at all, for fear they will be liable for the algorithm surfacing unknown organizational bias. I’m not sure where the greater bias risk resides in this case: humans or machines. We all know the studies on unconscious human bias.

       If the algorithm instead does semantic or network-based matching, the match is less about repeating past selections and more about aligning language. In that case, I think the bias risk is reduced, and I would trust that machine more.
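
       Here is a rough sketch of that “rank new resumes against past successful hires” approach (hypothetical one-line resumes; it assumes scikit-learn is available), showing how incidental traits the past hires happen to share can dominate the ranking:

       ```python
       from sklearn.feature_extraction.text import TfidfVectorizer
       from sklearn.metrics.pairwise import cosine_similarity

       past_hires = [
           "lead developer, rugby club captain, state university",
           "senior developer, rugby enthusiast, state university",
       ]
       new_candidates = [
           "senior developer, chess club president, community college",
           "developer, rugby player, state university",
       ]

       vec = TfidfVectorizer().fit(past_hires + new_candidates)
       hires_m = vec.transform(past_hires)
       cand_m = vec.transform(new_candidates)

       # Score each new candidate by average similarity to the past hires.
       scores = cosine_similarity(cand_m, hires_m).mean(axis=1)
       for text, s in sorted(zip(new_candidates, scores), key=lambda t: -t[1]):
           print(f"{s:.2f}  {text}")
       ```

       Nothing about rugby or a particular university is job-relevant, but because the past hires shared those tokens, the candidate who resembles them wins, and each such hire feeds the same pattern back into the training set.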
