Quality Hire? Try Hiring Quality, Says One Startup

A Scottish startup launching around the first quarter of 2017 says it will measure what it calls “hiring quality,” a metric meant to capture not just whether an individual was a good hire, but the quality of the hiring process as a whole.

Quality of hire is a coveted recruiting metric, and a number of people have looked for new ways to measure it. It presents a particular challenge for talent acquisition, whose relationship with the new employee may have ended months before quality is measured. Quality of hire is even on Wikipedia now, and it is part of ERE’s new benchmarking tool.

Anyhow, Talenytics says what it wants to do is “understand the overall hiring quality of your organization.”

Briefly, what happens is hiring managers and recruiters define the criteria that are important in a candidate. The recruiter submits candidates to the manager, with some initial scores given against those criteria. During the interview process, whoever is assessing the candidates scores the candidates against the criteria, allowing managers to see who ranks higher and lower, and why.


In addition to a candidate quality score, the system measures whether recruiters and managers are happy with the hiring process. So you end up with a candidate quality score, a process quality score, and together a hiring quality score.
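To make the arithmetic concrete, the scoring model described above might be sketched as follows. This is a minimal illustration only: the per-criterion scale, the averaging, and the equal weighting of the two sub-scores are all assumptions, since Talenytics hasn’t published its actual formula.

```python
def candidate_quality(criteria_scores):
    """Average of the scores assessors gave a candidate against the
    agreed hiring criteria (assumed here to be on a 1-5 scale)."""
    return sum(criteria_scores) / len(criteria_scores)

def hiring_quality(candidate_score, process_score):
    """Combine the candidate quality score with the process quality
    score; an equal 50/50 weighting is assumed for illustration."""
    return (candidate_score + process_score) / 2

# Hypothetical candidate scored against three criteria,
# with recruiter/manager process satisfaction rated 3.5
cand = candidate_quality([4, 5, 3])   # 4.0
overall = hiring_quality(cand, 3.5)   # 3.75
```

With per-criterion scores recorded this way, candidates can be ranked against one another, and the same numbers can later be compared to on-the-job performance evaluations, as the comments below discuss.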

Talenytics began as a tool used by an RPO for its clients, and it is now being rolled out to some beta clients. The company envisions customers with a talent-acquisition team of at least five people and 1,000 or more employees overall.


11 Comments on “Quality Hire? Try Hiring Quality, Says One Startup”

  1. While I’m pretty sure I can’t tell from the article or their website how well they predict hire quality, I really like their focus on following up on candidates once they get on the job. We need more of that generally.

    1. Absolutely, Tom; it’s a key part of Talenytics’ capability, supporting our objective of enabling truly data-driven recruitment.

      1. I would be glad to have a conversation around our recent advances in moving the needle on hiring prediction power, and what that means for incremental revenue/hire. We have consortium research (13 growing web companies) showing 50% increments in prediction power. Applied to sales roles, that means $85K of increased annual sales per hire for territory sizes around $1M. For sales folks that stay on average 4 years, that is $340K of increased sales per hire for a cost of under $500 per hire. Now that’s a Return on Talent!

  2. Sounds good in theory, but the way the article reads, there may be too much catering to what people want as opposed to what they actually need. Asking hiring managers what they want in a candidate raises the question of whether they’re good at hiring successful people to begin with, which is one of the problems with ideas like this. They tend to concentrate on the-customer-is-always-right solutions regardless of whether the customer is actually right. Unless there’s a rock-solid tie between the program’s definition of ‘quality’ and the actual performance and productivity of hires, I don’t see it working too well in the long term.

    1. It’s a good point. We’ve ensured the recruiter is involved in determining hiring criteria, and we’ve also allowed global, unchangeable criteria to be set, e.g. cultural attributes. The really important aspect, however, is that data is collected on the performance of new hires, fed back into the platform, and compared to the selection-criteria scores. This means hiring managers and recruiters can really see what good (or not so good!) looks like and adapt their selection criteria accordingly.

      1. Not quite my point, but the adaptation is something that’s critical. Is there a way to ‘back test,’ as it were: feed previous pre-hire evaluations of existing employees into the system to establish some kind of baseline? Or do you have to start from scratch?

        My overall point is, how do you make sure they define good performance correctly? I’ve seen hiring managers move to fire, and successfully fire people, for reasons that have nothing to do with performance. So if, as an example from real life, an HM is psychotically devoted to a dress code and routinely writes up one particular team member because he has the wrong type of collar on his shirt, and then eventually fires the guy despite the fact that his work product was great, how does this system account for that kind of thing?

        And yeah, that happened.

        1. In terms of the pre-hire evaluations, these are recorded against every hiring process and can be compared against post-hire performance evaluations. So we can see the difference between what a hiring manager scored a candidate during the recruitment process and their evaluations once the hire is in the role. We can then start to see how effectively a hiring manager recruits, i.e., whether she/he can spot good employees or struggles to.
          If a hiring manager terminates an employee, they have to enter the reasons for the termination, including rescoring the original selection scores. That flushes out why someone who was hired was subsequently fired.

        2. That is a ‘sticky wicket,’ to borrow a British term, and if it happened only once or twice, it’s probably not fixable. However, hiring managers who repeatedly fire new hires who score high on the assessment, and thus deliver good work product, would land in a warning zone on a talent-mismanagement report. So would managers who repeatedly reject high-scoring talent or hire low-scoring talent.

          1. Now that’s accountability going both ways, which is what’s needed. Very well done. I would expect, though, cynic that I am, that this would lead to change-management issues, specifically HMs who want to avoid such accountability and so avoid using the tool like the plague. Has that been the case?

          2. Yes, of course, and some recruiters do too! Top-level sponsorship and ongoing coaching are essential.
