Metrics That Actually Mean Something

A few weeks ago John Sullivan wrote an article citing a few disturbing recruiting numbers: 70% of participants are dissatisfied with the hiring process; 46% of new hires turned over within the first year (50% for new executives); and top producers produce 40-67% more than others. Sullivan recommended a variety of solutions, one of which was better assessment tools. A few weeks later, Lou Adler wrote an article suggesting that quality of hire is significantly more important than cost per hire. He also suggested a few ways to evaluate source quality based on candidate skills.

I applaud these comments. They have been a long time coming. But in many ways they are like advising Robert Reich to grow taller: easy to say, but doomed to disappoint. The formula for fixing these recruiting problems is threefold: 1) if you cannot define it, you cannot measure it; 2) if you cannot measure it, you cannot control it; and 3) if you cannot control it, you have a 50/50 chance of being wrong.

Defining Requirements

Most executives tend to measure performance by results. But wise ones know that by the time results are posted, the activities that produced them are ancient history. Frustration ensues because not even executives can control performance after the lights go out and everyone has gone home. It must be controlled in the moment.

Do you remember the old joke about the man who prays many years to win the lottery? Eventually, he hears God say, “Give me a little help here? Buy a ticket!”

Well, how does this sound? After years and years of praying for improved employee performance, God finally says, “Give me a little help here? Start accurately assessing important job skills!”

Buying a ticket and accurately assessing candidates are the only things an employee, manager, or recruiter can control. The rest is out of our hands. As a professional psychometrician (i.e., one trained to identify and accurately measure the KSAs needed to perform a particular job), I can attest that nothing, I repeat, nothing has a better ROI than implementing an accurate hiring program.

But wait! There’s more! You also get candidates who think it is more professional; it’s exactly what the DOL recommends; every candidate is treated equally; turnover is reduced; individual performance rises; and training needs decrease. Can anyone name another organizational program that can do all that?

No one expects you to be an overnight expert, but you can begin by using a basic rule of thumb. Divide job requirements into six general factors: the mental ability required to learn and make decisions; the organizational ability necessary to implement projects and plans; the interpersonal skills necessary to deal with people; associated attitudes, interests, and motivations; the special occupational knowledge required; and essential physical abilities. For each factor, identify the level that affects job success or failure.
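To make the template concrete, here is a minimal sketch in code. The factor names follow the article; the 1-5 scale and the example threshold values are hypothetical illustrations, not a validated instrument:

```python
# A minimal sketch of the six-factor job-requirement template.
# The numeric 1-5 levels and example thresholds below are hypothetical.

FACTORS = [
    "mental_ability",          # learning and decision-making
    "organizational_ability",  # implementing projects and plans
    "interpersonal_skills",    # dealing with people
    "attitudes_interests",     # associated attitudes, interests, motivations
    "occupational_knowledge",  # special job knowledge required
    "physical_abilities",      # essential physical requirements
]

def qualifies(required, measured):
    """True only if the candidate meets or exceeds every required level."""
    return all(measured.get(f, 0) >= level for f, level in required.items())

# Example: a job that weights mental ability and knowledge heavily.
required = {"mental_ability": 4, "occupational_knowledge": 3, "interpersonal_skills": 3}
candidate = {"mental_ability": 4, "occupational_knowledge": 4, "interpersonal_skills": 2}
print(qualifies(required, candidate))  # False: interpersonal skills below the required level
```

The point of the structure is the one the article makes: until each factor and its required level are written down, there is nothing to measure a candidate against.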

Clear and concise definitions are not optional.

Are you doing this already? Sure clues that you are not: relying on position descriptions, hiring managers who cannot agree on whether a candidate is qualified, or hiring managers who insist on seeing multiple candidates so they can compare them to each other instead of to the job.

Measuring Applicants

Picture this: A candidate applies for a job. He or she presents a resume (more than half of resumes, Sullivan says, contain lies); you have a few hours to ask questions about past job experience (which will probably be exaggerated); you may call a list of pre-screened references (who will probably lie to you); give a test borrowed from a training class (that has no proven link to job performance, but people seem to like it); get together with hiring managers to argue about whether the candidate is job-qualified (where everyone seems to have a different opinion); and six months later, tentatively ask managers whether they spent tens of thousands of organizational dollars wisely (expecting them to be honest). Sound familiar?

Is it any wonder why senior executives are always re-examining the value of HR? Why external recruiters have trouble differentiating their service from internal ones? Why organizations get sued for unfair employment practices?

What? You don’t use assessments? You only use interviews? You think the DOL guidelines are only for dummies? Keep reading.

Assessment is just a fancy term for measuring or evaluating job skills. Resume screens are assessments, as are application blanks, interviews, reference checks, tests, and photographs. Anything used to identify job-qualified applicants is an assessment. So, as Sullivan recommends, should recruiters use more assessments? Sorry, folks. That is like recommending fish swim more often. If you have asked a candidate a single question, you have used an assessment.

Now that you know you are an assessor, come to grips with the fact that gut feelings and unstructured interview assessments deliver reprehensible results. You don’t need more questions or interviewers. You need better tools.

Step on the Scale, Please

Hiring managers seldom have weeks, months, or years to observe and evaluate candidate skills … only a few minutes or hours. So, how does one get accurate and reliable data about whether a candidate has job skills? It helps to divide assessments into two classes: asking questions informally (i.e., person to person) or asking questions using a formal approach (i.e., pencil and paper or web-based format).


If you are already familiar with structured interview technology, you can think of it as either asking the candidate to recount a situation, action, and result, or formally controlling the situation and the possible range of actions and using a standardized scoresheet to evaluate results. The objective in both cases is the same: evaluate whether candidate performance would lead to successful job performance.
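As an illustration of the standardized-scoresheet idea (the anchors, ratings, and 1-5 scale here are hypothetical; a real scoresheet would use behaviorally anchored ratings validated against job performance):

```python
# Sketch of a standardized scoresheet for a structured interview.
# Anchors and ratings are hypothetical illustrations only.

ANCHORS = {1: "no relevant behavior shown", 3: "adequate behavior", 5: "model behavior"}

def score_interview(ratings):
    """Average each question's ratings across interviewers, then across questions."""
    question_means = [sum(r.values()) / len(r) for r in ratings]
    return sum(question_means) / len(question_means)

# Two situation/action/result questions, each rated 1-5 by three interviewers.
ratings = [
    {"rater_a": 4, "rater_b": 5, "rater_c": 4},
    {"rater_a": 3, "rater_b": 3, "rater_c": 4},
]
print(round(score_interview(ratings), 2))  # 3.83
```

The value is not in the arithmetic but in the standardization: every candidate faces the same situations and is scored against the same anchors by multiple raters, instead of against each interviewer's gut.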

An example of a poor assessment tool is a wife who asks her husband if she is getting fat. A foolish husband will size her up and render an opinion. A wise husband will instantly clutch his side, fall on the ground and scream, “Dial 911! I think my appendix has ruptured!” Measuring body weight based on opinion is irrelevant and dangerous. The same is true of assessments.

In choosing a set of hiring tools, always keep in mind that assessments that predict job performance are not the same as style or type assessments. Legitimate vendors are eager to show documented proof that their assessment scores predict employee performance. Illegitimate ones are not. Either way, the user, not the vendor, is solely responsible for how an assessment is used.

Now let’s talk about the compelling power of human nature (or, why unstructured interviews persist). As pointed out by S.M. Colarelli and M. Thompson in the September 2008 issue of Industrial and Organizational Psychology, early humans lived in small bands of about 150 people, where physical survival depended on quickly sizing up someone based on face-to-face or word-of-mouth communications. People who made faulty decisions under these conditions often died before they could pass on their gullibility genes. On the other hand, people who made better face-to-face survival decisions lived to have offspring who shared the same skills. Let’s acknowledge the force of nature.

Although most recruiters are not exposed to mortal danger, how often do you hear them say they want to get to know the candidate personally? Or a hiring manager say they know in their gut if it’s the right person? These are ancient survival techniques unconsciously doing their job. They have no place in recruiting. Once hiring managers or recruiters decide the candidate is not Hannibal Lecter deciding whether to invite them to lunch, they should start accurately assessing the skills that lead to job performance.


Controlling the Process

Controlling the hiring process does not mean asking managers to complete smile sheets or to rate results (results, if you remember, may or may not be a product of the “hows”). You need managers’ objective feedback about the new employee’s on-the-job hows so you can compare it to the data from your original assessments.

Start by using the six factors I recommended earlier. This model provides a template for asking questions and comparing actual performance with measured performance. Look for things you might have missed, need to evaluate better, have duplicated, or have left unclear. Then make informed changes. Improving quality of hire is a TQM process applied to people skills.

Research has long shown that choosing people with the right skills depends largely on clear definitions, measuring hows twice, using different assessment methods to measure the same hows, using multiple assessors, and measuring a full range of critical job skills. How do you get started? Well, you could hire a full-time psychometrician like the big organizations do, you might rent an expert for a few weeks to get started, or you might start using the system described above.

This completes the human performance cycle. If you cannot define it, you cannot measure it. If you cannot measure it, you cannot control it. If you cannot control it, you have a 50/50 chance of being wrong.

In other words, there is only one way to get there from here.


22 Comments on “Metrics That Actually Mean Something”

  1. Dr. Williams, I respect your willingness to call a spade a spade. I often read articles that are, at their core, nothing more than veiled business development initiatives. I usually chalk them up to how the commerce stream of staffing spend is influenced in our space. Then, I see someone like you showing the Emperor’s nakedness and I am encouraged.

    The elephant in the room is that most consultants evaluate QOH pre-hire . . . and pre-hire only. There is little to no link to performance post-hire. The result? Recruiting Organizations rating their aptitude on QOH before the new hire even starts.

    Pre-hire QOH means nothing if there are no tangible and quantifiable results after the onboarding phase. It’s like Notre Dame recruiting (20) 5-star high-school football prospects and then barely winning .500 of their games.

  2. Great article, Dr. Williams! There is a huge need for TQM-level discipline in the process – and it starts with better, more consistent information about each applicant.

    Quality-of-Hire for me is comprised of: 1) Retention, 2) Performance at 90 days and 1yr, 3) movement – any promo or xfer indicates that the org was willing to rehire the person, 4) manager satisfaction with hire – time to productivity, fit for job, etc.

    These are all post-hire (outcome) metrics — and all should improve if the steps you have recommended in the assessment space are implemented and followed.


    Nicholas Garbis, Sr. Consultant
    Infohrm, global leader in workforce planning and analytics

  3. KSA’s, QOH, CPH, TQM, Time to Productivity, Job Fit, Talent Assessment, etc., etc., etc.

    The best predictor of future success is past performance. Period.

    Demand documented, verifiable, evidence of past performance and you’ll save time, money, and acronyms.

  4. I spend an enormous amount of time researching predictive job performance indicators – and, not being an IOP, most of that time is spent speaking with IOPs and reviewing what they write on the topic.

    I doubt if you will find a more direct, easy to grasp primer on the basics than what Dr. Williams has written today. It is the road map for how to implement and what to include. Thankfully, it will be extremely difficult for companies to incorporate, leaving market openings to those willing to fill the void for them.

    There are a ton of assessment companies in the marketplace – many of them excellent, many of them not. In most cases the successful ones create industry-, company-, and job-specific measurement tools to benchmark against for 60-70% accuracy of predicted job performance. The problem is that hiring managers will still revert to the caveman instincts that Dr. Williams mentions – ignoring the data in front of them. If there were a source of information that provided both the clear job-indicator measurements and the “I love you, Man!” feel for hiring managers to review before meeting a prospective new hire, new-hire turnover rates might begin to decline somewhat…but of course we will have to kill the resume off first…(OK – one thing at a time).

  5. I was able to get a true structured interview implemented for field management with one of my past companies. It truly made a difference in quality and retention. Some side benefits became evident as well.
    -speed of hiring…with the hiring team better calibrated decisions could be reached faster
    -development for management…with a structured interview team members learn what their boss and peers think good looks like
    I have personally used a structured interview when hiring recruiters for years. It helps with the hiring decision but can also be a great benefit in understanding where a recruiter needs to focus on development…helps you start a development plan in the hiring process for your hires.

  6. Note to Dave Pollock…Evaluating past behavior is an assessment. It becomes a better predictor when the past job is an exact match for the future job. Otherwise, how can one explain the good-performer-bad-manager syndrome? PB is just another source of data.

  7. Note to Wendell… My comment was specific to past performance, not behavior. Performance implies a measurable set of goals and objectives with quantifiable outcomes. If you hire someone for a job they have no experience doing why would you be surprised by failure?

    Good-performer-bad-manager syndrome? Otherwise known as GPBMS?? Will HR forever try to broadly identify causal relationships by assessing individual interactions? This may work with huge data sets (and reliabilities/validities that “support” a decision) but in the real world we hire and fire one at a time.

    How about: Past performance is still the best predictor of future success. Therefore, the Director needs to perform due diligence on the Manager’s performance.

  8. I think this is a good point that bears some discussion. Are you saying that you don’t believe inferring job competencies (by examining only past examples of performance) requires some pretty huge assumptions about what he or she did or said to achieve that performance?

  9. Nice article, Dr. Williams. I am surprised at some of the comments. I disagree that past performance is the best predictor of future success, unless the person is in the same position with the same company, with the same level of support, and with the same external factors to work with. In my twenty years of recruiting prior to my present career, I can cite hundreds of examples of individuals who performed phenomenally at one employer only to fail miserably at the next in a similar role. The point is that the best predictor of success is how well the person fits the role in your company, not how well they fit someone else’s company or the industry. Lastly, a person’s behaviors drive their performance. When this is understood and you know how the behavior relates to the performance you need, you can predictably select for performance by evaluating the behaviors.

  10. One of the biggest myths in our space is that the best predictor of future performance is past performance. That being said, it’s easier to sell candidates with a track record of success than those without. Uncertainty is reduced when you can show a track record of success . . . but there’s more to the stew than just the potatoes.

    For example, consider the following:

    a. Salesperson X is ‘unsuccessful’ selling vacuum cleaners door-to-door at $100 USD+.
    b. Same-said Salesperson X is ‘successful’ selling enterprise level software packages at $1 Million USD+.

    OR . . .

    a. Recruiter X is ‘unsuccessful’ in a high-volume hiring environment involving hiring primarily transactional roles.
    b. Same-said Recruiter X is ‘successful’ in a highly targeted hiring environment involving the recruitment of highly tacit roles through relationship selling.

    These are rudimentary examples, yet ones we can all relate to. I have met many Recruiters who would perform sub-optimally in a ‘body shop’ (i.e. org with high turnover and low morale) while they would thrive in a more high-performance environment despite the complexity of the roles they would be recruiting.

  11. It is pretty clear that we could take a lesson from Wall Street: “…past performance of fund managers is not a predictor of future performance.” The biggest mistake investors make is focusing on last year’s fund performance and not on what really drives returns…

    In our people-centric world, what really drives returns is knowledge of the six keys to assessment success outlined at the beginning of this article.

  12. So long as a distinction between behavior and performance is not made, this discussion will continue to compare apples with fruitcakes.

    In addition, goals, objectives, and quantitative results are the keys to my position. I hire people. I don’t offer people up to be hired. In that capacity I get to choose the performance indicators that best match the position I’m filling. Most importantly, the objective data provided along with that indicator is mine to accept or reject as relevant. If these are categorized as job competencies, so be it… but I want to look at the quantitative data related to the performance of that competency, not the test scores that say they have them.

    As for Fund Managers, you can choose the one that best fits your assessment tests and I’ll choose the one that has 15 years of success and one huge failure. We’ll talk again in 10 years.

    Keep in mind that I’m not saying that there is a 1:1 relationship between past and future performance. I’m saying it is the best predictor.

  13. Unfortunately Dave – the one huge failure you’re willing to accept cost us too much – possibly the end of prosperity for many years to come…

    In a company, if it is for a game-changing position, the same mistake could sink the business – I am not willing to take that risk if I can help it, and that is exactly what we are discussing here – there are numerous indicators of performance that, combined, can get us about 70% there (not just a test) – but by only reviewing what they previously did in some other space, at a different time, under a whole bunch of different circumstances, etc., I think that we can do better than that.

    By the way from an investment perspective, somewhere I read that, “yesterday’s masters of the universe are today’s cosmic dust…” I don’t believe that any financial advisor can identify in advance the top performing fund managers – no one can – and I would avoid those who say they can do so!

  14. Wendell… I’m saying that the validity and reliability of a test is also never 100%… despite larger and larger sample sizes (and countless tests). I suspect – and I admittedly have no quantifiable data whatsoever to support this – that hires based on quantified performance data have better outcomes. Again – I have no idea how this would be measured, it’s simply a hypothesis based on 30 years of experience.

  15. K.C. – The anecdotes are interesting but not exactly on-point. Frankly, I think you’re underselling assessment test reliabilities. My experience is that .7 is too low an estimate and I’d hate to have to sell an assessment package to an organization at that level. Perhaps Wendell can help us out here with some averages for multiple tests.

    As for Fund Managers and that “whole bunch of different circumstances”, just what is it that makes the circumstances of tomorrow different than the circumstances of yesterday? Reliability is just another way to say, “playing the odds”. And odds are, there will be thieves, crooks, bad legislation, and greed tomorrow too. But individuals will still be investing and I’ll still play the odds-on favorites. “Them that does, is them that knows”.

  16. Fair enough – if I remember correctly somewhere in the archives is an article Wendell wrote about getting a 70% performance assessment accuracy from taking the comprehensive assessment approach he espouses in his article today…maybe he could clarify if he sees this part of our discussion…good luck!

  17. Thanks for passing that on, Joshua.

    I read this last year in the actual magazine and am glad to have it copied into my files from the website for future reference…

    It is a terrific article and spot on for this discussion!

  18. No assessment process is perfect…there are too many unknown and unexpected variables…the best anyone can do is take a physician’s approach and rule out the obvious things that can be measured.

    For example, if I know I need X-level of intelligence for a job, I try to verify if the candidate has X-level of intelligence using behavioral interviews, case studies, validated tests, and so forth. I always use the three-bears approach…not too little…not too much…just right.

    The object is to balance the type, number, and expense of assessments against the risk of job failure. For example, I might only use BEI with low-level jobs that anyone could perform; however, I throw the kitchen sink at professionals, managers, and executives who could cause some real damage to the organization. I’ll also do more assessment and validation documentation if the organization is a litigation target.

    Think of assessments as stair steps…each one adds more accuracy to the hire: starting with chance (the casual interview) and ranging upward one assessment at a time. In my experience it can rise to about 90% if I clearly understand HOW a job is to be performed and if my candidate passes all my HOW assessments. When that happens I have 90% confidence that my candidate is intelligent enough, has the right kind of interpersonal skills, likes the job, has the right planning skills, and so forth, to be successful.

    Now, if he or she goes to work for a jerk or decides to have a mid-life crisis, then all bets are off… good people do not work long for bad managers.

    Finally, managers seldom understand correlation or validity numbers, so I explain it to them this way… While we don’t have performance data on people who don’t pass interviews (e.g., they don’t get hired), we do know that people who are hired quickly sort themselves into a bell curve of about 20% high performers, 20% low and the rest in the middle. This is very easy to see in training programs, sales, turnover, and other situations where the performance is somewhat easy to see.

    A good assessment process tends to shift the midpoint of that bell curve dramatically to the right. So much so that yesterday’s high becomes today’s average.
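    For what it’s worth, the shift is easy to demonstrate with a toy simulation (every number below is made up for illustration – a seeded random draw, not real hiring data):

    ```python
    # Hypothetical illustration of the bell-curve shift: job performance is
    # normally distributed; screening on an imperfect-but-valid predictor
    # moves the mean of the hired group well to the right.
    import random
    import statistics

    random.seed(42)
    N = 10_000
    performance = [random.gauss(100, 15) for _ in range(N)]
    # Predictor = true performance plus noise (a valid but imperfect assessment).
    predictor = [p + random.gauss(0, 10) for p in performance]

    # Hire the top 20% ranked by predictor score.
    ranked = sorted(zip(predictor, performance), reverse=True)
    hired = [perf for _, perf in ranked[: N // 5]]

    print(round(statistics.mean(performance), 1))  # close to 100 (unscreened average)
    print(round(statistics.mean(hired), 1))        # noticeably higher: curve shifted right
    ```

    Even a noisy predictor, applied consistently, moves the hired group’s average well above the unscreened population’s.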

  19. Thanks for the clarification on the level of accuracy you are able to achieve…in most cases that last 20-30% is the most difficult for us – organization, interpersonal skills, mental acuity, and such can be measured – but not without knowing whether the manager truly is a jerk, or a helicopter manager, or hands-off, or whatever – and rarely do we get to assess managers beyond using our emotional intelligence to see beneath the layers of veneer they put up to deflect their inefficiencies. Rare is the leader or manager who shares these things (although we have probably all come across one or two along the way and been spot-on with the talent we have provided them…).

    Now take it back to a third-party talent manager who gets one shot – maybe 45 minutes to an hour – hopefully face to face, but for mid-level roles probably on the phone – to learn these idiosyncrasies: 90%? Not a chance…but if you’re really diligent about the core assessment principles, you can provide stellar people who are competent and – with the luck of meshing well (we always make adaptability a highly sought-after trait in addition to the rest) – can marshal talent that performs very well for 2-3 years…when 18 months is the average job length for folks under 35, that’s more than passable…(we moved to this demographic when we carefully reviewed the generational gaps in the market…)
