Does Increasing Interviewing Accuracy Improve Quality of Hire?

John Sullivan wrote a great piece on ERE a few months ago, titled Five Ugly Numbers You Can’t Ignore. John’s article pointed out public research indicating fundamental flaws in the interviewing and assessment processes used by most companies.

As a result of John’s article, I participated in a series of animated discussions on these ERE pages regarding the relative impact of increased interviewing accuracy on quality of hire. Now, I know the academics among us get excited when they believe that better assessments directly correlate with improved quality of hire, but research from the Recruiting Roundtable — a well-respected research group — suggests this is not actually true.

In a recent public report, the Roundtable compared the impact nine variables had on improving quality of hire and time to hire. Interestingly, at least according to their research, accurate interviewing and assessments had no impact on improving quality of hire. Regardless, accuracy remains a gate to pass to get into the game. The top three factors for improving quality of hire were a strong recruiter and hiring manager partnership, a clear understanding of job needs, and the recruiter’s ability to convert candidates at every step, from prospect to hire. This last point has to do with keeping the candidate engaged, overcoming concerns, presenting the job as a career move, negotiating offers, and keeping the competition at bay.

Lending some credence to the Recruiting Roundtable results is a report from Leadership IQ documenting a three-year survey it conducted with 5,247 managers covering more than 20,000 hires. The big conclusions — 46% of new hires fail within 18 months, with only 19% totally successful. The biggest surprise of all was that the interviewing methodology used didn’t affect the results. I find this confusing, since I know that conducting an accurate assessment is a necessary, though not sufficient, aspect of improving quality of hire. The report went on to suggest that managers overvalued technical skills instead of evaluating other aspects of on-the-job performance, including motivation, emotional intelligence, coachability, and temperament. This alone indicates that the candidates were not interviewed properly, and to some degree puts in doubt some of their other conclusions.

So while there is some data out there that contradicts published research from some of the top names in academia, it’s hard to believe that accurate assessments aren’t important, since without a qualified and motivated candidate, you’ll wind up with a bad hire. Perhaps the problem is one of curvilinearity: once a threshold level of sufficient capabilities is met, recruiting skills take over as being far more important in improving quality of hire.

The academic research does suggest that while a validated and structured interview is important, it might not be all that important in the overall scheme of things. For example, the often-cited Schmidt and Hunter study reports that the combined correlation coefficient for a structured behavioral interview and GMA test is .63. In practical terms this means that only 36% (the square of the correlation coefficient) of the candidate’s predicted on-the-job performance can be explained by these two factors, leaving 64% of job performance due to other factors. I’m surprised that more has not been made of this critical point. By itself, this might explain why the Recruiting Roundtable and Leadership IQ reports found that interviewing accuracy has much less of an impact on quality of hire than would be expected.

Taking a different perspective entirely, in some cases an accurate assessment can actually be counterproductive, especially when good people refuse to move forward until they determine the job offers a career move. This is why I suggest moving the necessary assessment steps later in the hiring process to maximize the end-to-end conversion rate without compromising quality. Of course, this is a moot point when the supply of quality candidates exceeds demand, a rare situation in normal economic times.

Adding to the supply shortage dilemma, many of the best people — especially those with significant upside potential — are looking for career moves and learning opportunities. In these cases they might not have the requisite skills, knowledge, and abilities, and could be excluded for the wrong reasons. This relates to the classic potential vs. experience trade-off problem.


On another level, the relationship between interviewing accuracy and quality of hire is further distorted because they are separate and unequal tasks. If maximizing selection accuracy is your primary objective, you might overlook the bigger challenge of hiring the best people possible, including those who aren’t looking, those who have more potential than experience, those who have a different mix of skills, and those who have multiple offers. Each of these factors requires a rebalancing of the sourcing, recruiting, and selection process in order to maximize quality of hire. This is pretty much what the Recruiting Roundtable results indicated.

When supply is less than demand, a myopic maximize-assessment-accuracy objective leads to the potential for sub-optimization, or sacrificing the whole for the sake of one of its parts. I experienced this problem firsthand early in my pre-recruiter career. Many, many years ago, in a place far, far away, I was involved with negotiating company transfer prices with a brake plant that wanted to sell spare parts to Ford and GM, rather than to an internal axle assembly plant, since it got better prices by going rogue (external). This caused corporate earnings problems, since the parent company not only made more money selling completed products, but, worse, never had enough brakes to meet the demand for completed axles. It took six months to figure out the problem and develop an internal transfer pricing system to make sure the brake plant did the right thing. This is similar to having accurate assessments but not enough good people to be interviewed. As a result, you’re left with assessing a population without any top performers in it, making the conclusions suspect. Some of the research mentions this as a potential problem with their data.

The way I see it, this apparent assessment vs. quality of hire controversy involves three big issues:

  1. The fact that most assessments – even good ones – don’t cover the complete range of factors involved in measuring top performance. Some of these include subordinate and managerial fit, intrinsic motivation to do the work required, achievement of comparable results, trend and consistency of performance over time, and the ability to work with and influence teams of comparable size, level, and functional makeup. Measuring these multiple times and in multiple ways can increase assessment accuracy.
  2. Ignoring the idea that the assessment is only a subset of the hiring process, not the complete hiring process, and that the linkages are not generally seamless. Just because someone is judged a top performer doesn’t mean the person will be hired. Problems here relate to recruiting skills, the hiring manager’s ability to attract a strong person, the career aspects of the job in comparison to competing opportunities, and the compensation.
  3. Fundamental problems with how the interview and assessment process is implemented and how hiring decisions are made. Problems here generally involve lack of clarity with respect to the actual performance needs of the job, lack of hiring manager training, the use of a yes/no “add up the votes” decision-making process, not using evidence to make the decision, using a narrow band of selection criteria, and over-valuing presentation skills, affability, and intuition when making the decision, among others. Eliminating these is an essential aspect of the hiring process.

Given all of the survey evidence, the academic research, and my own personal experience of dealing with top performers and also-rans over the past 40 years, I would not discredit the necessity of a thorough interviewing and vetting process. However, I do believe that the traditional behavioral interview is far from the perfect solution, and could be a contributing factor preventing companies from improving quality of hire. There are interviewing and assessment solutions available that have been proven to be more accurate, but without better sourcing, a great recruiter, a clear understanding of job needs, and a strong recruiter/hiring manager partnership, you won’t be much better off. In this case, you’ll just be more confident you’re hiring someone in the half that makes the top-half possible.

Lou Adler is the CEO and founder of The Adler Group – a training and search firm helping companies implement Performance-based Hiring℠. Adler is the author of the Amazon top-10 best-seller, Hire With Your Head (John Wiley & Sons, 3rd Edition, 2007). His most recent book has just been published, The Essential Guide for Hiring & Getting Hired (Workbench, 2013). He is also the author of the award-winning Nightingale-Conant audio program, Talent Rules! Using Performance-based Hiring to Build Great Teams (2007).


67 Comments on “Does Increasing Interviewing Accuracy Improve Quality of Hire?”

  1. Do you know if the study controlled at all for the case where the problem is with the job rather than the person? There’s a lot else that can go wrong in 18 months.

  2. @Lou
    Very informative. I agree with you that we cannot discredit the necessity of thorough interviewing. For organizations, hiring the right people with the right skills for the right job represents an ongoing challenge. There are hundreds of assessments to choose from if the organization wishes to use the talent portion of the overall assessment process. I would recommend that a company compare several different assessments and their benefits before making a final choice, and have a sound knowledge of the nature of the job. Conducting a talent assessment can help an organization save time and money by helping the company find leadership potential among job applicants. Talent assessment with technology integration can show better results before proceeding to an interview.

  3. No one argues that relationships between recruiter and candidate are unimportant. You make a living at it. But arguing that anyone with a selection-science background knows less than you do shifts the argument from best practices to a personal level.

    For example, every time someone like Gallup or the Roundtable presents an interesting group survey, it does not mean we can use that same data to make assumptions about individuals in the group. That’s called Aristotelian logic. Examples can be seen every day in terms of racial prejudices or profiling.

    Anything used to evaluate a candidate IS an assessment. There is no such thing as an assessment OR an interview. They are both intended to predict candidate performance. Claiming that interviews are more accurate than other tools would be equivalent to claiming pilot skills can better be predicted by an interview than a flight simulator.

    The Schmidt and Hunter study is a meta-analysis. That is, it takes a wide range of individual studies and examines them for trends. It is an average of averages designed to show relative relationships; it’s not the 11th commandment of selection tools. Even small correlations can make a big difference on the job. You can read up on this effect in the Taylor-Russell tables.

    If anyone involved in making hiring decisions wants to improve their success ratio, the answers are not secret. They are clearly spelled out in the Guidelines and Standards for all to read.

  4. Wow – you’ve packed quite a lot into one article. Boiling it all down, it seems that based on the reports and all of the other factors you cite, assessment is less important than the actual process of getting the hire made.

    Having spent the last 15 years in retained search, I would agree that interviewing for functional ability is much less important – we only entertain people already doing the role, typically from a competitive company, so motivation and interest are a much bigger issue than capability, which is usually a given. The problem with this conclusion is that there is a huge world out there of all types and versions of recruitment processes. It is so diverse that it is hard to take the two reports you cite very seriously. Although I am certain that the methods both groups employed to reach their conclusions were sound (I have a ton of respect for both organizations), it is impossible in my view to break out a definitive set of parameters from 5,000+ managers who employ different methods to get a result (not usually an effective one at that, as you cite…).

    The fact that 46% “failed” in 18 months is hard to attribute to any specific set of factors. Particularly when many studies indicate that the Net Generation (16-32 yrs.) average only 18 months or so in a job anyway. The 19% that were “totally successful” seems like a very dubious measurement as well – what does that mean? Who decides what that is? If anything, to me it correlates with the thinking that only the top 20% are successful anyway – and that is what is probably being measured there…

    In my experience, the assessment IS the only measurement important in recruiting – as long as you’re measuring mental acuity, attitudes, interests, motivations, creativity, emotional intelligence, organization, presentation – and all the rest… Something we wholeheartedly agree on, Lou, is that without knowing whether a person can DO the job (not just functionally – all of the rest), none of the managing of candidate motivation, offer acceptance and expectation management, or managing the hiring manager’s expectations and all the rest of the recruiter/hiring manager partnership even comes into play… So it’s hard for me to agree with the conclusions of the reports in your post. Assessment is the key ingredient to the entire process of making a successful hire – the recruiter’s part in it makes it happen – but “on the job success” is based on how well we all assessed the “fit” for the new hire.

    It’s kind of like the recent energy debate. Is it solar or wind or biofuel or nuclear that we should bet on to replace fossil fuel – or is it all of the above and then some. I check the “all of the above” box on this one.

  5. The problem with a lot of the research conducted outside of peer-reviewed journals is that its methodology is highly suspect. I don’t know how the Recruiting Roundtable generated the results it did (the brief summary report you link to doesn’t describe it), but the Leadership IQ study you point to is classic: it relies on survey results.

    The high-quality research, such as the Schmidt & Hunter study you mention, uses statistical methods to correlate actual test scores with real measures of job performance.

    The issue is not about whether to use an assessment or not, or whether to interview or not–it’s about the best way to interview. I/O psychologists don’t claim that a structured interview will explain 100% of performance. But 36% is a heck of a lot better than 0%.

    I do agree that the domain of assessment needs to be broadened, and I think your call for consideration of other types of assessments is right on.

  6. At best, the conclusions that have been drawn from these ‘studies’ are dangerous. The Recruiting Roundtable link gave no insight into how they concluded interview type and assessments were not helpful. What type of interviews and assessments were compared? How were they developed? Were they appropriate for the position? How was post-hire performance measured? Maybe the two-page overview linked in this write-up wasn’t the correct article.

    The Leadership IQ ‘study’ is perhaps even more troubling, with Mr. Murphy’s (the CEO of Leadership IQ) conclusion that “Highly perceptive and psychologically-savvy interviewers can assess employees’ likely performance…”

    The last thing I would want in my organization is a talent acquisition team full of armchair psychologists who, in Mr. Murphy’s words, “accurately read and assess candidates.”

    Further, Mr. Murphy’s assertion that “Hiring failures can be prevented” is disingenuous. If this were the case, I would expect his organization (and those to whom he provides services) not to have experienced any negative attrition. Certainly that claim can’t be made.

    The .63 correlation cited from one of Schmidt and Hunter’s studies would in fact have a very meaningful and practical ROI in many, if not most, organizations. Also, the variance accounted for would be 40% (not 36%). Human behavior is extremely complex, and accounting for this chunk of variance is not trivial.

    There are dozens of variables that a successful, world-class recruiting and selection process must possess – including strategic sourcing, hard-working recruiters, and knowledgeable human resource personnel. Structured interviews designed to objectively measure applicant attributes, together with scientifically validated assessments, also play a critical role in employee selection, as evidence-based research has exhaustively shown. Further, there are many variables outside an organization’s control, including economic conditions, currency fluctuations, and labor situations.

    I would recommend evaluating rigorously conducted research by professionals without a product to sell when deciding how, when, and what to expect from implementing a particular interview or assessment strategy.

  7. Dangerous is right. My concern is with conclusions that others will reach on the basis of the information contained herein. In actuality, the points made in the last paragraph are dead on – a well designed selection process is only part of a successful recruitment process that also requires a significantly large and talented applicant pool and recruiters who manage the recruitment process and applicant relationship and have solid interview skills that add value to the selection equation. Why the article needed to start with an attempt to discredit assessments is not clear.

    There are also some other portions of this article that don’t exactly add up. For instance, what does the author mean by the word potential, and how is that measured in the recruitment process? To me, a focus on “potential” would mean considering applicants who don’t necessarily have experience in the job or job-specific competencies, but do have the raw materials to succeed. These raw materials are basic, underlying KSAOs like general mental aptitude. I also don’t understand the logic of the argument that assessments can be counterproductive because good candidates will not move on in the process until they know they really want the job.

    Advocates of “selection science”, such as yours truly, would be the first to tell you that assessments have limitations and their contributions to hiring quality are dependent on several other factors. A few general principles that are worth sharing along these lines:

    1. ROI for selection is strongly influenced by the available talent pool in terms of quantity and quality.

    2. Selection also only works well when your top candidates actually sign on and stay. To make sure this occurs, you need strong recruiters who manage the relationship and sell the organization.

    3. In addition to managing the relationship, recruiters (with strong interviewing skills) can add substantial value to the selection equation and hiring quality. However, this needs to involve a structured interview and process. Notions that intuitive human judgments are good predictors of later success are seriously in error. A large body of research exists showing (see Dawes, 1979, American Psychologist, for an actual controlled research study) that human intuitive judgments of all kinds are not predictive of outcomes and are far inferior to a rule-based system (based on statistical modeling) such as selection science. This research should be as well known as the oft-cited Schmidt and Hunter paper.

    4. There are times when we just have to make a hire and take the best of a less-than-stellar pool. Still, using top-down selection with validated assessments is better than any other alternative.

    5. We should consider multiple selection paths for the same job – one for experienced candidates and one for inexperienced. We should not overlook inexperienced candidates who may possess the raw materials to succeed. Assessments are best equipped to measure “potential.” Potential in this case would be the basic, underlying qualities, skills, and abilities known to predict success on the job.

  8. The point of the article was to suggest that assessments are not as important as those in the assessment business believe. This is not to discredit them, just to suggest that they are part of a system, and while important, not all that important in improving quality of hire. In fact, as stated in the article, using them can eliminate some of the best candidates from consideration through sub-optimization. This is the problem I’ve seen time and again over the past 30 years.

    Also consider that a GMA test in combination with a structured interview has a correlation coefficient of .63, meaning only 40% (the correlation coefficient squared) of on-the-job performance is explained by these two factors. It also means that 60% is unexplained. This is a HUGE missing piece. Shouldn’t this caveat be highlighted at the beginning of every article, not put in a footnote that can’t be found? It’s these non-assessed factors that are the primary causes of underperformance – lack of cultural fit, lack of managerial fit, and lack of intrinsic motivation to do the real job in the real environment – not the KSAs.

    Now get this. According to the same Schmidt & Hunter meta-analysis, the correlation coefficient of an unstructured interview plus a GMA test is .55. Squaring this gives a coefficient of determination of 30%. This means only 30% is explained by these two factors vs. 40% with a structured interview. Adding a structured interview to the GMA test yields only a 10-percentage-point improvement in confidence. Think about all of the money that is spent to get these measly 10 percentage points.

    I believe the Recruiting Roundtable report is based on bad science (I have looked at the details behind it), and I also believe that the Leadership IQ survey is equally questionable. But I think we all have to honestly consider that the traditional BEI is flawed. For one thing, it could worsen quality of hire through sub-optimization, and given the cost of implementation and enforcement vs. the gain in accuracy, it’s a bad business process. There are alternate ways to increase assessment accuracy and improve quality of hire. Putting effort into a fully integrated solution is where the effort should be applied, not defending something with marginal benefits at best.
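    The variance-explained arithmetic in this exchange can be checked in a few lines of Python. This is a simple illustration of squaring a validity coefficient, using the Schmidt & Hunter figures quoted in the thread; it is not part of any of the cited studies:

```python
# Coefficient-of-determination arithmetic from the discussion above.
# Squaring a validity coefficient r gives the share of on-the-job
# performance variance a predictor combination explains.

def variance_explained(r: float) -> float:
    """Return the percentage of variance explained by correlation r."""
    return r * r * 100

structured = variance_explained(0.63)    # GMA test + structured interview
unstructured = variance_explained(0.55)  # GMA test + unstructured interview

print(f"structured:   {structured:.1f}% of performance explained")    # ~40%
print(f"unstructured: {unstructured:.1f}% of performance explained")  # ~30%
print(f"gain from structuring: {structured - unstructured:.1f} points")
```

    (The exact figures are 39.69% and 30.25%, so the gain is about 9.4 percentage points; the thread rounds these to 40%, 30%, and 10 points.)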

  9. A few things:

    1) You presume that “those in the assessment business” are a unified body. We are not the Borg, nor do we claim assessments are the be-all and end-all of hiring.

    2) I totally agree that poorly designed assessments can slow down the process, causing us to lose good candidates as well as not do a good job of predicting performance. That’s a great reason to make sure your assessments make sense.

    3) We can go back and forth about the percent of explained variance, but here’s the thing: the reason it keeps coming up is it’s the BEST SCIENTIFIC EVIDENCE that I know of showing a research-based relationship between recruitment/assessment practices and job performance. Shouldn’t our discussion and consultation be based on evidence, not anecdotes?

    Point me to equally good research (i.e., meta-analytic) that demonstrates that things like fit (which inherently has to do with assessment), motivation, recruiting practices, etc. are statistically related to job performance, and I’ll eat it up.

    It’s unfortunate if “we” come across as overselling assessment; I certainly don’t intend to. But nor do I think it should be brushed aside in the name of pseudo-science. The reality is, when it comes to HR, there’s a lot of great research on assessment. The recruiting side of the house is woefully – embarrassingly – under-researched.

  10. Bryan – re: the best-science point. What if the best science is none too good?

    I think there is much more to be done here, and as far as I’m concerned the science offered isn’t that good. For a recruiter who must guarantee each placement for one year, it falls far short. Every other scientific discipline has progressed dramatically in the past 20-30 years, yet the field of assessments seems to be stuck in a time warp. Let’s develop some science around the 60% not covered, rather than the same 40%.

    Perhaps the anecdotal evidence might lead to better science, so ignoring it seems hedgehog-like. (This is a reference to Isaiah Berlin’s essay: “The fox knows many things, but the hedgehog knows one big thing.”) After you read this summary you can categorize yourself into either camp, but as far as I’m concerned most people in the OD/Assessment field are clearly hedgehogs. Here’s the link –

  11. Let me see if I have this straight.

    Lou would like ERE readers to believe that a relationship-based recruiting process (which, incidentally, he happens to be selling) is far superior to anything recommended by the DOL Guidelines; to the APA Standards for Educational and Psychological Testing; to job-analysis-based, validated, multi-trait-multi-method systems developed by huge consulting companies like DDI and AON; to the internal systems of many of the Fortune 1000, such as ATT, Sears, Home Depot, Sprint, and so forth; to peer-reviewed research going back at least 100 years; and to the teachings of several hundred terminal-degree-granting universities? Furthermore, he bases this opinion solely on experience with his company’s 12-month guarantee?

    Did I miss anything?

  12. Lou – there is no doubt that snake-oil products exist in the assessment world. I’ve seen many, many instances where assessments provided negligible value toward predicting quality of hire. I’ve even seen several instances where assessments provided negative value (sub-optimization, as you choose to call it). No organization is immune to the improper use of tests – I’ve seen misuse across government and private industry in many verticals.

    However, over the last 15 years I’ve also been involved with dozens of evidence-based utility analyses showing the direct impact of well-validated selection tools that provided seven-figure ROI. Let me repeat – the improvement in employee quality that came directly from scientifically validated assessments had a multi-million-dollar positive impact annually. These results came almost exclusively from customized tools developed, validated, and continually improved by professionals, for use in contexts where the myriad variables impacting assessment usefulness (see Mr. Williams’ links) are well understood. Further, these returns were seen while accounting for less than 25% of the variance.

    I would be highly skeptical of anyone claiming to account for upwards of 40% of the variance in human behavior, as defined by job quality, without a (prohibitively) exhaustive assessment process. Even when an exhaustive process is executed (e.g., fighter pilot selection) and combined with millions of dollars in training, success is NEVER ensured.

    Please feel free to reach out and I will share an overview of previously conducted utility analyses. There is absolutely no argument that interviews and assessments are only one arrow in the quiver of optimized selection. But, with careful scrutiny, they can be the most scalable, affordable, and profitable parts of the integrated solution.

  13. Absolutely let’s focus on the other factors that help us predict performance. But let’s do so in a rigorous scientific way.

    Maybe the reason why some assessment people seem to you to be hedgehogs is because that’s what they do for a living (although I think you’re overgeneralizing). Much like how your own occupation no doubt influences your perspective.

    And although I take issue with your claim that assessment hasn’t progressed in 20 years, as I’ve said in earlier comments, I’m not sure what we’re arguing over here. More research into factors influencing performance? A focus on how we can best serve our customers? No argument here!

  14. One more note, Lou – the statement that “Every other scientific discipline has progressed dramatically in the past 20-30 years, yet the field of assessments seems to be stuck in a time warp” reveals a lack of understanding of other scientific fields.

    Medical research, for one, continues to use a scientific method very similar to that employed in the assessment world. Any guess as to the improvement in ‘variance accounted for’ between many pharmaceutical interventions today and twenty years ago?

    Today, some blood pressure drugs (the second most commonly prescribed class of drugs) are approved and considered great successes if they show correlations with improvement in the .2 to .4 range in clinical studies. Yes, that translates to 4% to 16% of the variance. Following your logic, we should abandon these approaches. Read the small print on your prescription inserts for more proof. (Fun fact – most blood pressure meds prescribed today were developed thirty years ago – so much for progress.)

    Like prescription drugs, those considering the use of assessments should know what they are using, efficacy of previous trials, and noted side effects.

  15. Let me restate the primary premise of this article – increasing assessment accuracy doesn’t directly relate to improving quality of hire. This gets at the need for strong sourcing, great recruiters, some type of meaningful job analysis, hiring manager competency, compensation, company culture, etc. The assessment process is a subset of a bigger system (hiring top people), and all sub-systems need to work in harmony to maximize overall quality of hire. When one of the sub-systems compromises the overall system results, it needs to be modified accordingly. Some examples include a fully qualified top performer who is casually looking and voluntarily opts out due to the bureaucratic nature of the tool, or a high-potential person with a different mix of KSAs who is inadvertently excluded. Without considering these higher-level, system-level effects, focusing on assessments misses the forest for the trees. (Wendell, my first job was as a systems engineer on the Minuteman missile guidance system, and we often had to trade off accuracy against the weight of the system. Note: I was better at finding these people than doing the work.)

    A point regarding the missing science: I find it interesting that the people who get promoted internally and succeed often would not have been hired into the bigger role had they applied externally. This makes the case that some new approaches are needed that are more comparable to the internal assessment process, since it is much more accurate than the external assessment process. Why do some people insist on relying on old techniques when there are so many obvious flaws?

    The second point of the article is that the structured behavioral interview is an insufficient and flawed tool that doesn’t cover the full dynamics of most jobs. This is the 60% credibility gap I find bothersome. The companies Wendell cites as proof of the tool’s use could also be cited as proof of why the tool shouldn’t be used. They’re not top performers, and they’re not representative of complex jobs.

    The third point is that I do support increasing assessment accuracy using validated tools. (The only tool I don’t like is the structured BEI.) As long as conversion rates are high – those starting and finishing the assessment – I’m okay with inserting it anywhere in the process that makes sense. I’m also willing to test out any and all assessment tools and recommend them to my clients if they work.

    Now for some other tidbits:
    1. Old science that works is fine with me as long as it continues to work. Airplanes still fly close to 100% of the time, but we would want some new science if they only flew 40% of the time. And even though they still work almost 100% of the time, we keep making them better.
    2. The DOL is not considered representative of the latest thinking.
    3. Wendell, why do you think I base my opinions solely on offering a 12-month guarantee? This was just my catalyst for seeking better ways to increase assessment accuracy, and in the process I found many of them. I also found many other recruiting firms who offered a similar guarantee and had developed their own advanced assessment techniques that worked for them.
    4. Tommie – are you aware that Lipitor is advertised as reducing strokes and heart attacks by over a third? Yet when you look at the “science,” that claim is based on a reduction from 3 occurrences out of 100 to 2 out of 100. Yet they advise that all 100 people with high cholesterol take the pill daily. My doctor wasn’t even aware of this, and as a result has become much more cautious about prescribing it. BusinessWeek ran a long piece about three years ago on the flawed science of medical testing.
    5. Wendell, why do you suggest that I discount a thorough and validated assessment process including many of the points you raise? I just don’t like the standard structured BEI that isn’t linked to some direct measure of performance in a comparable environment. In fact, we’ve found many ways to incorporate all of this in a user-friendly format, which has been validated, approved, and implemented. I also know that no matter how good the assessment process is, it means nothing if the best person isn’t hired at some reasonable level of comp.
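    The Lipitor arithmetic in point 4 is worth making explicit, since it shows how a modest absolute difference becomes an impressive-sounding relative claim. A minimal sketch, using only the 3-in-100 and 2-in-100 figures cited above:

```python
# Relative vs. absolute risk reduction, using the figures cited above:
# roughly 3 events per 100 on placebo vs. 2 per 100 on the drug.
control_risk = 3 / 100
treated_risk = 2 / 100

absolute_reduction = control_risk - treated_risk        # 1 percentage point
relative_reduction = absolute_reduction / control_risk  # ~33% -> "over a third"

# Number needed to treat: people who must take the pill to avoid one event.
nnt = 1 / absolute_reduction

print(f"absolute risk reduction: {absolute_reduction:.1%}")  # 1.0%
print(f"relative risk reduction: {relative_reduction:.1%}")  # 33.3%
print(f"number needed to treat:  {nnt:.0f}")                 # 100
```

    The same distinction applies to assessment vendors quoting relative improvements in hiring outcomes without stating the base rates.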

  16. Tommie – I just noticed this on CNN moments after the other post:

    The diabetes drug Avandia is linked with tens of thousands of heart attacks, and drugmaker GlaxoSmithKline knew of the risks for years but worked to keep them from the public, according to a Senate committee report released Saturday.

    Also, I just came back from visiting my older brother yesterday at the hospital. He had liver poisoning from too much Tylenol which his Dr. told him to take to reduce the pain from another surgery.

    I think it’s appropriate that we begin questioning the so-called science we use to make all types of decisions.

  17. Okay, maybe it’s the big lunch I ate, but I’m still having an issue with your premise as you state it: increasing assessment accuracy doesn’t directly relate to improving quality of hire.

    I’m just not understanding how that is supported. We have research that demonstrates the exact opposite. Perhaps you should qualify it with the point you made about curvilinearity?

    This whole debate seems like a wonderful opportunity for a webinar or other public discussion.

  18. If I understand Lou correctly, I think he’s saying it’s like a sales pipeline: Lead scoring is useful, but better lead scoring won’t improve the leads you get, the product you sell, or the people who sell it.

  19. I think Lou has a point that current assessments are outdated; at the same time, I think Tommie offers the solution – customization.

    When we at HireLabs were raising money from VCs, we were asked, “You have 140 different competitors; what makes you think that you will succeed?” So we went back to the drawing board at Stanford University and emerged a couple of months later with the answer – HireLabs would offer customized assessments for any position.

    The next question from our investors was “okay, so how are you going to provide validation if you continuously create new assessments?” Fortunately we anticipated this question and went in with 5 validation studies that we had conducted. Our findings were very interesting:

    We realized that in order for an assessment to work, it needs to take into account the job description (JD); but more importantly, before even looking at the JD (which is usually outdated), we need a quick 20-minute phone call with the line manager to do a job analysis (JA). Once we know what is needed for the position, we can begin to design a customized and more relevant (not perfect) assessment.

    One of the studies we conducted was for an oil company that was going to send its telecom engineers from their office in Amsterdam to their rigs off the coast of Nigeria. After we had conducted the JA and studied the JD, we understood exactly what we needed to assess for. Since Nigeria is politically unstable (lots of gunshots heard at night), and because the candidate pool was from Amsterdam, we needed to test for two main behavioral traits:
    1. Ethnocentricity – if the candidate was ethnocentric, he/she would not survive, and it would hurt the productivity of the company (ROI)
    2. Ability to work in a stressful environment – if the candidate could not acclimate himself/herself to the pressures of the environment, it would hurt the productivity of the company (ROI)

    We also included 5 other skill-based tests.

    All candidates who took our assessment also took a generic assessment offered by a competitor. None of the applicants were told the job location before taking the assessments.

    The applicants who scored high on our assessment did not do too well on the competitor’s, and those who scored high on the competitor’s assessment were not too keen on taking the position once they found out it was in Nigeria. However, 8 of the 11 who scored high on our assessment had no reservations about working in Nigeria.

    You are all smart individuals; I am sure you can draw your own conclusions.

  20. Lou…May I suggest admitting you know little or nothing about psychometrics except for what you read in a few wrongly-interpreted studies. Using this venue to convince people that you: 1) know what you are talking about in psychometrics (which you clearly do not); and, 2) are the only person who has a better selection product (which you clearly do not) is a complete misuse of the forum granted to you by ERE. Your hubris at offering to give a keynote presentation at a professional technical conference for which you are totally unqualified is even more embarrassing.

  21. Also, here’s a politically-incorrect question: to what degree are assessments hobbled by the fear of plaintiff’s counsel?

    The infamous Microsoft/Google interview questions, application forms that ask for SAT scores, even which college(s) a person attended, for instance, are all pretty much proxies for IQ. They’re all imperfect, though, particularly the last, which is highly correlated with socioeconomic status, for instance. They’re also ubiquitous, while direct IQ testing is effectively illegal for everyone except the government, which is a pretty good sign somebody thinks they work.

  22. Bryan – re: the great debate, see below for the online webinar link on March 25th.

    re: quality of hire vs. quality of the assessment. Let’s consider LinkedIn. There are 60mm names; 20% of these, or 12mm, are looking for a job. 48mm are not looking. In general, but not always, one would suspect that there are more top performers in the group not looking. To contact these people takes exceptional sourcing and recruiting skills. It takes a lot of work just to get them to agree to be put under the microscope. Now, while an accurate assessment is essential, there’s still more work to be done – negotiating an offer, keeping it closed, fighting off the competition. Plus, without a strong hiring manager the person is likely to opt out early in the process. This is what I mean when I say that increasing assessment accuracy doesn’t imply you’re maximizing quality of hire; it just means you’re accurately assessing the competency of the group you’re considering. Putting the best people into the top of the funnel and keeping them involved step-by-step until they get hired and on board is the big game. The assessment is only part of it, though getting it right is critical.
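    The funnel arithmetic above can be sketched quickly. The 60mm/20% LinkedIn figures come from the comment; the per-step conversion rates below are purely illustrative assumptions, there to show why every step of the funnel, not just the assessment, bounds quality of hire:

```python
# Funnel arithmetic from the LinkedIn example above (60mm members, 20%
# actively looking). The per-step conversion rates are made-up assumptions.
total_members = 60_000_000
active = int(total_members * 0.20)   # 12,000,000 looking for a job
passive = total_members - active     # 48,000,000 not looking

print(f"active: {active:,}  passive: {passive:,}")

# Each recruiting step converts only a fraction of the previous one, so
# losses at any step shrink the pool the assessment ever gets to see.
steps = [("contacted", 0.10), ("engaged", 0.30), ("interviewed", 0.50), ("hired", 0.20)]
pool = 1000  # passive prospects sourced for one search
for name, rate in steps:
    pool = int(pool * rate)
    print(f"{name:12s} {pool}")
```

    Under these assumed rates, 1,000 sourced passive prospects yield only a handful of hires, which is why conversion at every step matters as much as assessment accuracy.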

    Here’s a link to the online debate with all of the details. Wendell, you’re invited to participate as a panelist. I admit I’m not an expert in psychometrics, but I’m sure you’ll be able to show me the error of my ways. Link to the great debate ––does-interviewing-accuracy-really-improve-quality-of-hire&option=com_eventlist&Itemid=89

  23. Thanks for clarifying, Lou. To the extent that your point is that the results of assessment depend on the candidates being assessed, I don’t know that you’ll find anyone who would disagree. Nor would anyone likely disagree that passive recruiting skills, communication, branding, etc. are key to attracting and managing talent.

    It’d be great if the webinar topic would be expanded to talk about evidence-based recruitment/assessment, however.

  24. Lou – your points around the effectiveness and side effects of drugs are exactly my point.

    Just as it is the patient’s responsibility to know what they are ingesting, it is the organization’s responsibility to know about the assessment tools they are using.

    Medical professionals and I/O professionals have a responsibility to give their patients/clients full disclosure on their prescribed remedy in a language that can be understood. I’m shocked your physician didn’t know the efficacy of Lipitor. That is as embarrassing as a selection scientist ‘guaranteeing’ their solution will offer a panacea to an organization’s quality problems. (I’d shop around for a new MD).

    Just as a cholesterol drug will have little impact by itself without other critical lifestyle changes (e.g., diet, exercise) – an assessment or interview intervention will have little impact in absence of other critical tools and processes (e.g., optimized sourcing).

    I look forward to the future discussions that bring to the surface these important debates. Anyone purchasing recruiting and selection consulting owes it to themselves to be an informed consumer.

  25. Note to all: I’m beginning to do some work on better quantifying the experience of military vets. This relates to another point on a problem with sub-optimization of using traditional BEI and KSAs.

    For example, let’s assume you need to hire an accounting manager with five years of supervisory experience and strong international accounting ability, plus a CPA. One of the people who applies is a 3-year CPA who was the senior on an international consolidation project, but has no corporate-level managerial experience. However, she went to Iraq after ROTC for two years and held some type of field command. Using traditional KSAs and knock-out questions she would typically be excluded from consideration, since her military experience wouldn’t count.

    Assuming she was the best candidate, this would be an example of sub-optimization, and it is pretty typical of what goes on every day. Using traditional BEI and KSAs, comparable experience is not considered, and fast-trackers – generally the best of the bunch – get excluded since they don’t have enough “experience,” at least on paper.

    This is also a problem when hiring diverse candidates, since they typically don’t follow normal career routes. Many young people don’t either, especially these past few years, so they could be overlooked for the wrong reasons. Alternatives are needed that open up the door to these fully-qualified, but non-traditional, people.

    Recruiters are part of the solution and this is what much of my talk at the ERE Expo in San Diego in March will be about – how do you find and get the best candidates into the game despite the bureaucratic bottlenecks?

  26. Lou, et. al.,

    As I read through this back-and-forth correspondence, my feeling is that we might be arguing about different aspects of the recruitment process. I doubt anyone here would argue that anything you can do to expand the applicant pool, particularly in terms of quality, will greatly increase the quality of hire. Indeed, it is probably more important than improving assessment accuracy. But you talk about the “best applicants” as if they just readily announce themselves and stand out for all to see. You also make some general claims about this population, e.g., that they are more prevalent amongst a group of non-applicants than applicants, without any real support for your argument. Setting aside your disdain for what most of us consider to be convincing evidence of assessment utility, it seems just plain logical to think that anything that can aid us in determining who these best applicants are (i.e., improving assessment accuracy), and thus devote more resources and time towards their recruitment, would significantly improve quality of hire.

    A couple of other points worth considering:

    40-50% variance accounted for may be the best we can ever do. This is due to the inherent unpredictability of human behavior and more importantly, the organizational and external factors that also impact job performance. An individual’s job performance is influenced by both person factors (KSAOs and all the other unmeasured factors Lou mentions) and organizational factors (e.g., quality of supervision, pay for performance, etc.). In the recruitment process we can only really measure person factors and hence we will always be missing a significant part of the equation.

    The suboptimization you continually speak of is referred to as Type II error by us assessment geeks. Type I error is typically the focus, as it reflects a situation where a mishire occurs, while Type II is where we miss out on an otherwise qualified applicant. Paradoxically, anytime you increase hiring requirements and thereby reduce Type I errors, you also increase Type II errors. Indeed, even the best assessment process will miss out on high-quality applicants, but I challenge you to find me a process that doesn’t do the same to some extent. We are quite literally playing the odds with our selection systems. However, it is a process that works if you consistently apply it. There is another real-world application of playing the odds you may have heard of – gambling. I am pretty sure that Vegas has found this system to be quite effective.
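    The Type I/Type II tradeoff described here is easy to demonstrate with a toy simulation. Everything below is illustrative: applicant quality and assessment noise are both assumed to be standard normal, and the cutoffs are arbitrary:

```python
# Toy simulation of the selection tradeoff: raising the hiring cutoff
# reduces mishires (Type I errors) but screens out more genuinely
# qualified applicants (Type II errors). All numbers are illustrative.
import random

random.seed(42)

applicants = []
for _ in range(10_000):
    quality = random.gauss(0, 1)            # true on-the-job performance
    score = quality + random.gauss(0, 1)    # assessment = quality + noise
    applicants.append((quality, score))

def error_counts(cutoff):
    """Count mishires (Type I) and wrongly rejected good applicants (Type II)."""
    type1 = sum(1 for q, s in applicants if s >= cutoff and q < 0)
    type2 = sum(1 for q, s in applicants if s < cutoff and q >= 0)
    return type1, type2

for cutoff in (0.0, 1.0, 2.0):
    t1, t2 = error_counts(cutoff)
    print(f"cutoff {cutoff:+.1f}: Type I = {t1:4d}, Type II = {t2:4d}")
```

    Because the score is a noisy proxy for quality, no cutoff eliminates both error types; moving the bar only trades one for the other, which is the point being made above.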

    Past job performance in a related position is a better indicator of future performance than even the best assessments of KSAOs. This is why we are able to find internal applicants who will be successful even though they don’t have the ideal mix of KSAO scores. This doesn’t mean, however, that we should abandon use of KSAO assessments. For one, having relevant and reliable indicators of past job performance is a rarity, and two, we need some means of assessing external applicants.

    Accounting for even 10% of variance in job performance is huge, financially speaking. I would be more than happy to share with you the utility calculations that show how much this can impact the bottom line dependent upon the hiring ratio (again where the size of the applicant pool truly matters), financial impact of individual increases in performance, etc.
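    The utility calculations offered here typically follow the Brogden-Cronbach-Gleser model. A hedged sketch with entirely made-up inputs shows why even a validity of roughly .32 (about 10% of variance explained) can be financially meaningful:

```python
# Brogden-Cronbach-Gleser utility: estimated dollar gain from a selection
# tool. Every input below is an illustrative assumption, not real data.
n_hired = 50              # hires per year
validity = 0.32           # r ~ sqrt(0.10): a tool explaining ~10% of variance
sd_performance = 40_000   # SDy: assumed dollar value of one SD of performance
mean_z_of_hired = 1.0     # average standardized score of those selected
cost_per_applicant = 50   # assumed assessment cost
n_applicants = 500

gain = n_hired * validity * sd_performance * mean_z_of_hired
cost = n_applicants * cost_per_applicant
print(f"estimated annual utility: ${gain - cost:,.0f}")  # $615,000
```

    The point is directional rather than precise: with a reasonably selective hiring ratio, even modest validity multiplied across many hires dwarfs the cost of the assessment itself.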

    We should continually strive to measure new and important constructs, several of which have been mentioned herein. However, there is a good reason why this has not occurred to a large extent. Namely, these constructs are often ill-conceived, difficult to measure using any conventional techniques, and demonstrate little to no relationship to job performance. Constructs like emotional intelligence have great commercial appeal but have demonstrated little utility within recruitment and selection. If your interest is in making a few bucks then this will certainly serve you well, but if your interest is in truly moving the needle, then it is better to stick to tried-and-true measures (like GMA or even structured interviews) with known relationships to job performance. In terms of progress, our field has developed novel measurement methods including fully dynamic computer simulations, conditional reasoning measures of personality, etc. However, at the end of the day, we continually find that a few key constructs drive all the variance explanation in job performance.

  27. Excellent points, Gunnar.

    To Lou’s point about “KSA methods” and knocking out applicants: what you describe is a problem with how the screening process is being applied, not with a particular assessment method. Granted, this is an unfortunate method that many employers use to narrow the field.

    Part of the problem, which I don’t think we’ve addressed yet, has to do with efficiency. Particularly in the case of large employers (and especially in the public sector), they may not have the time or resources to consider things like comparable experience or communicating with every single applicant on a one-on-one basis. (Although there are technologies that make both of these much easier) Many screening methods are used in the name of efficiency.

    Perhaps this discussion might best be directed toward coming up with clear, usable suggestions for employers both large and small to optimize the recruiting and hiring process.

  28. @ Lou regarding bureaucratic bottlenecks:

    We at HireLabs work with governments in the Middle East, Africa, and Asia, so we know how to ease bureaucratic bottlenecks. Let me know if you want to have this conversation offline, where I can share with you our experiences; then you can incorporate them into your environment.

  29. Gunnar – I have just started reviewing your eloquent reply, and upon further study I will probably agree with much of it. However, I do have one concern about a point you made early in your post – “In the recruitment process we can only really measure person factors and hence we will always be missing a significant part of the equation.” Why do you conclude this? Your conclusion seems to defy the idea of improving the science and instead accepts the status quo as the best there is. This is the underlying problem I find disturbing about those in your field. As far as I’m concerned, this is not good enough, and it is more a cop-out than an explanation.

    To me, getting better should be the goal. At my firm, we have found a way to model all aspects of a job and have found techniques to measure them in a logical and validated way. This is written about in my book Hire With Your Head, which did make #3 on the Amazon best-seller list when it came out. Read the 57 reviews to gain a sense of the impact it has had on companies around the world.

    Some of our clients who would dispute the conclusion noted above include Cognos/IBM, the YMCA, one of the top financial institutions in the country (ask Dr. Charles Handler who did the validation study), the City of Edmonton, Cancer Centers of America (ask Trudy Knoepke-Campbell for the validated data), the Washington State Dept of Transportation, Bose, Biosite, DaVita, Sodexho, and the Inter-American Development Bank, to name a few.

    From my experience the problem is not the methodology, but the implementation. But when you start with understanding how the best people make career decisions, look for jobs, and compare one opportunity to another, you find new ways to improve your odds of hiring the best person available, rather than the best available person. You also get the hiring managers involved from day one, which is something the OD folks seemed to have overlooked or ignored.

  30. Lou,

    I probably do have a more cynical, or as I would prefer, pragmatic view of the situation. But I am not trying to cop out on making improvements. I probably did a poor job of explaining person vs. organizational factors and their impact on job performance. Let me try to restate. What I am saying is that there are factors outside the control of the individual, and not reflective of anything to do with their own makeup, that impact job performance, like the quality of one’s supervisor. These are not individual differences and are not part of the evaluation of recruits. So when we try to look at variance explained in performance, these organizational factors surely account for a very large portion, thereby limiting the percentage of variance that any sort of recruiting tool could possibly ever account for.

    My cynicism is based on the experience of new methods or constructs being introduced as the latest and greatest, only to find that either they explain little to no incremental variance in performance beyond existing tools and/or that they actually measure something that was already being captured. For instance, people mistakenly believe that they are measuring whatever label they place on competencies included in competency-based interviews. However, when you break it down, interview scores and their predictiveness are driven by a few common factors like interpersonal skills, likeability, etc. regardless of the content of the interview. All that being said, I would be interested in hearing about your techniques and results you have seen and would do my best at putting aside my cynicism. I promise. 🙂

    I see considerably more opportunity in regards to the points you describe in your third paragraph about implementation, specifically being more adaptable to changing circumstances of the applicant pool, job requirements, etc. There is a lot of opportunity around bridging the divide between sourcing/recruiting specialists and us assessment geeks. As assessment specialists, we often begin with the premise that we need to sort through what we are given, instead of helping to improve the overall quality of prospects and increase the likelihood that good applicants remain. We need to do a better job of understanding that part of the equation and partnering with those who drive the pool. On the flip side, I do believe recruiters/sourcers would be well advised to also become more educated on the realities of assessment, both limitations and benefits. My experience has been that those with little insight into assessments often play the role of Monday-morning quarterbacks, utilizing anecdotal evidence to discount assessments while disregarding the benefits achieved on the whole. This would be another case of missing the forest for the trees.

  31. Gunnar – I’m quite impressed with how you state your case and present your viewpoint. It is rational and well presented.

    Re: your managerial fit comment. This was one of the key factors we found that prevented us from effectively offering a one-year guarantee. As a result – and at the suggestion of a few clients – we examined what Blanchard and Hersey were doing in the area of situational leadership. We now evaluate the managerial style of the hiring manager and the coachability style of the candidate in order to improve their working relationship. Profiles International then developed an assessment questionnaire to better understand this factor. If you send me your email address, I’ll send you a copy of our 10-factor talent scorecard. This will allow you to see the guidance we provide to ensure more accurate evidence-based rankings on the 10 factors we’ve found best predict on-the-job success. We will be funding a PhD program to further validate these results. Your insight and candid comments on the 10-factor scale would be welcomed.

    Aside from managerial fit, we find intrinsic motivation to do the actual work involved as a key driver of performance and job satisfaction. As part of the job analysis we clearly understand the core challenges involved in the job and the process needed to ensure high performance.

  32. I wonder if someone would define quality of hire? If I missed the definition, I apologize. If not, it seems that the debate about the accuracy of “assessments”, regardless of form, should begin with a clear understanding of what Lou means by quality of hire.

    I believe that deficiencies in clearly and accurately defining quality of hire, job performance, etc. underlie a good portion of the problems that have plagued selection science.

  33. Hi Brent… In my experience, defining quality of hire is like defining the perfect meal… There are many pieces-parts that include both intrinsic and extrinsic factors. Intrinsic factors represent skills brought to the job by the candidate; extrinsic forces represent uncontrollable forces that either help or hinder those intrinsic skills.

    Intrinsic skills include motivational factors, attitudes, special interpersonal skills, special problem solving skills, special physical skills, special technical knowledge, and special organizational skills. These are widely researched. They arrive in the morning and go home in the evening.

    Extrinsic forces include things like manager fit, the onboarding process, organizational climate, unexpected life events, and so forth. These are also widely researched, and are almost always moving and changing (and somewhat unpredictable).

  34. Hi Brent… defining job performance is like defining the perfect meal – too many pieces-parts: the appetizer, the main course, the dessert, the company, the price, the ambiance, the waiter, and so forth.

    Both intrinsic and extrinsic factors also affect ratings of job performance…Intrinsic ones include KSAO’s brought to work by the employee everyday. Extrinsic factors include things like manager fit, culture, performance management programs, family pressures, and the like. They are usually out of the employee’s control.

    Thus, before you can evaluate overall job performance, you have to first isolate what you want to measure.

  35. Dr. Charles Handler – our I/O Psychology consultant – does our validation work. He and I will be discussing the validation behind Performance-based Hiring at ERE’s Expo in March in San Diego. There is a summary whitepaper available on our website – – and a copy in Hire With Your Head.

    Here’s a link to an ERE article I wrote on how to define quality of hire –

    The basic idea is to define exceptional performance before hiring the person using a series of performance objectives developed by benchmarking exceptional performance. During the interview all candidates are asked to describe in detail comparable examples of similar work. This evidence is then used to rank the candidate on our 10-factor talent scorecard. The focus of this assessment is on performance trends over time, intrinsic motivation to do the work at peak levels, situational fit between the manager and new hire, the ability to influence and work with comparable teams, consistency in achieving comparable results, problem-solving and thinking skills, and cultural fit. In all cases, the assessment is in comparison to real job needs.

    I’ll be presenting much of this information at the upcoming ERE event, as well as SHRM’s SMA conference in Orlando in April, and their annual conference in San Diego in June. At all of these events I describe my concerns with the traditional behavioral interview, but really focus on the role the recruiter plays in increasing assessment accuracy while maximizing quality of hire. Balancing these often competing objectives with hiring managers who don’t have enough time and great candidates who want a lot of hand-holding is a challenge recruiters face every day.


  38. Sounds a lot like the critical incident technique combined with behavioral interviewing. Absolutely nothing wrong with that approach, but I’m not sure how it is a measurable improvement over the classic structured behavioral interview.

  39. Bryan – you may very well be right (it seems closer to the behavioral consistency model), but we use only two questions to do it. This is why it has such high user adoption rates – 90% or better. Candidates are also more engaged because they’re discussing accomplishments related to the work they’ll be doing. Clarifying expectations up front is a key to improving performance and on-the-job satisfaction (Gallup’s First, Break All the Rules).

    Also, during the interview we instruct recruiters and hiring managers to look for voids and gaps in the candidate’s background. These gaps – if reasonable – are then used to present the position as a career move, taking compensation off the table as the person’s primary decision criterion. Done properly, top candidates – especially those who are passive and/or have multiple opportunities – sell the company on why they’re capable of handling the bigger move. This way recruiters don’t have to sell the candidate, increasing the conversion rate every step of the way.

    In addition to formalizing the company’s yes/no process using a 10-factor talent scorecard, we also provide recruiters with a multi-factor career decision matrix to enable candidates to make a career-based decision about the company. This is one way we’ve integrated the assessment process into the recruiting process. Collectively, the idea is to design every step, from sourcing to interviewing to closing, with the goal of maximizing quality of hire.

  40. After seeing how much interest has been shown in this topic I feel compelled to add my point of view. As some of you don’t know me, I’ll offer a little background.

    I started recruiting in the late 1990s during the tech boom and ran a search practice for a couple of different boutique search firms (where I quickly identified the need to improve the predictability of on-the-job performance; like Lou, I needed to minimize the number of claims against a 12-month guarantee). I learned a lot about job analysis, competency modeling, and assessment, and eventually joined SHL, a leading global provider of assessment solutions. I now run my own boutique human capital consulting firm.

    To directly address Lou’s question: the answer depends on the accuracy of the current interviewing process and the accuracy of the job description. Assuming the job description is accurate (i.e., it includes the KSAs that are directly linked to performance and none that are not) and the current interviewing process has less than a .4 correlation with on-the-job performance, then improving interviewing accuracy will improve QoH. I say .4 because, of all the research studies I’m aware of, .4 is the highest correlation I’ve seen between interviews and on-the-job performance.

    Now, if you want to know whether you should be looking at ways to improve interviewing accuracy, just run a correlation study on your past hiring data and on-the-job performance. If you are at or very close to a .4 correlation, investments in this area may not pay off.
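    As a rough sketch of the kind of correlation study described above – with invented data and column names, assuming you can pull interview scores and later performance ratings for the same past hires – the Pearson correlation takes only a few lines of Python:

```python
# Hypothetical sketch: correlate past interview scores with later
# on-the-job performance ratings. All data below is invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Interview scores at hire time (1-5) and performance-review ratings
# a year later (1-5) for the same people.
interview_scores = [3.0, 4.5, 2.5, 4.0, 3.5, 5.0, 2.0, 4.0]
performance_ratings = [2.5, 4.0, 3.0, 3.5, 3.0, 4.5, 2.5, 3.0]

r = pearson(interview_scores, performance_ratings)
print(f"correlation r = {r:.2f}")
if r >= 0.4:
    print("Already near the ~.4 ceiling; more accuracy may not pay off.")
else:
    print("Below ~.4; improving interviewing accuracy could raise QoH.")
```

    In practice you would need far more than a handful of hires for the estimate to be stable, and restriction of range (you never see the performance of people you didn’t hire) will bias the result downward.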

    It should go without saying that if the job description does not include the correct KSAs, improving interviewing accuracy likely does little to improve QoH, because you were hiring for the wrong thing in the first place.

    I believe a more productive discussion for all of us would be to define all the factors that influence quality of hire, of which interviewing accuracy is only one. Discussing, and agreeing on, a definition of QoH would be great too (though perhaps we should save that can of worms for later)!

    I’ll begin by saying that I believe the first factor to affect QoH is the accuracy of the job description. I’ll add that in my experience, with over 150 executive searches and consulting to several Global 1000 companies on leadership assessment and selection, less than 50% of hiring managers and boards fully understand what it takes to do the jobs they hire for! They are usually mostly right and sometimes dead wrong. When I develop a job description, I analyze the hard and soft skills a person must possess (including ‘managing up’ and ‘coachability’), what motivates them to perform at their highest possible level, the acceptable and unacceptable cultural behaviors of the hiring company, and the management style of the hiring manager. Even with all this, I don’t expect to capture 100% of what predicts on-the-job performance, as I know there are external factors and some element of chance involved.

    The next factor that affects QoH is sourcing. If you have accurately defined the job but your sourcing efforts do not target 100% of the qualified candidates, your QoH will suffer. In my experience, unless you’re directly sourcing candidates, you don’t come anywhere close to identifying 100% of qualified candidates – probably more like 5-30%, depending on economic conditions, though I have no scientific evidence to back that up. A long time ago I changed my method of sourcing so that I could estimate the population of qualified applicants using a target list of companies by geography, sales, industry, and average annual turnover, and switched to direct sourcing methods to avoid spending countless hours screening unqualified resumes. I now come close to or exceed identification of 90% of the estimated qualified candidates for every executive search I do. This type of targeted sourcing is probably the best way to improve QoH, because it significantly increases the number of qualified candidates in the pool.

    The next factor that affects QoH is selling skills. Once you have identified a qualified candidate (one who is not already aware of and interested in your position), it takes good selling skills to break through the clutter of all the other things on the candidate’s mind and get them to pay attention to your opportunity, and it takes great selling skills to convince the most skeptical candidates to pursue it. If you don’t have great selling skills, you won’t get the candidates who believe they have great prospects in their current position to consider another opportunity! This is not to say that you won’t find a great candidate – some of them are looking and some are unemployed – and for those you may not need to be as strong a seller to get them interested. Selling, by the way, shouldn’t stop after the candidate expresses an interest. It should continue through the entire process, and it is not just the recruiter’s responsibility!

    The next factor that affects QoH is assessment, but it matters most when more of the candidate pool is qualified to begin with. I prefer to use some combination of interview, work styles, motivation, cultural, and ability assessments. If you believe the research done in this area, and I do, you know that an interview alone has at best a .4 correlation with performance, which means that only 16% of a candidate’s predicted on-the-job performance is accounted for by this method. Using a combination of methods like the one I favor, you can reach the .63 correlation level seen in some studies of assessment methods, which accounts for about 40% of a candidate’s predicted on-the-job performance. For those who don’t think it is worth it: the work styles, motivation, cultural, and ability assessments can increase the ‘cost of assessment’ by as much as 100%, but the increase in predictability of on-the-job performance could be 250% or more. If you know the current correlation of your assessment method and the expected correlation of a future assessment method, all you need is the financial value of performance and you can determine whether the extra cost is worth it.
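    The percentages above follow from squaring the correlation coefficient (the coefficient of determination, r²). A quick check of the arithmetic:

```python
# "Variance accounted for" is the squared correlation coefficient (r^2).
for r in (0.4, 0.63):
    print(f"r = {r:.2f} -> explains {r * r:.0%} of performance variance")
# 0.4^2 = 16%; 0.63^2 = 0.3969, i.e. about 40%.

# The relative gain in variance explained: 0.3969 / 0.16 is roughly
# 2.5x, which is the "250% or more" figure cited above.
print(f"relative gain: {0.63 ** 2 / 0.4 ** 2:.2f}x")
```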

    The next factor that affects QoH is offer closing. Once you have identified the candidates who are most likely to be high performers, you have to close them on an offer. This is where the recruiter/hiring manager relationship and the conversion skills that the Recruiting Roundtable report references come in the most. Even if you can identify qualified candidates and get them interested, it means nothing if you can’t close them. In my experience, the best recruiters in this area have exceptional negotiating skills (they don’t shy away from this, they live for it!), in-depth knowledge of everything the hiring company can offer and the ability to articulate it in terms of how it will affect the candidate’s life, and in-depth knowledge of what motivates the candidate.

    As to the weight to assign each factor, I think they are all equally important. If I am right, then interviewing accuracy accounts for only 20% of QoH.

    What do you think? Did I miss something?

    I hope we can advance this discussion toward an outcome that benefits all of us. Improving the ROI from human capital offers the greatest opportunity to improve organizational performance and profits, and improving QoH is a big part of this!

    CorDell Larkin (@cordellco)

  41. CorDell, you nailed it. Great post!

    It’s interesting that those who get paid by placing people and obtain repeat business based on how well they improve quality of hire have a different point of view from those who are focused on increasing assessment accuracy.

  42. I don’t know that you can place people quite so neatly into those categories. I think most/all of us here are focused on improving quality of hire, and most of our jobs depend to some (or a great) extent on how well we do that. My background is in assessment but I certainly wouldn’t describe my goal as “increasing accuracy.”

    The distinction is probably more tied to each person’s education/training and their outlook. Do you view talent management as primarily a function of job analysis? Sourcing? Assessment? Branding? Supervision? Retention?

    I’d like to think that most of us have a “systems” approach that takes into account all of those factors and more.

    Again, this is not a zero-sum game. Haven’t we grown beyond seeing our field as different “camps” and toward seeing people with different areas of focus that complement one another? Perhaps not.

  43. “Aside from managerial fit, we find intrinsic motivation to do the actual work involved as a key driver of performance and job satisfaction.”

    Hi Lou,

    Much as I am a fan of David “Mr. Motivation” McClelland, it is my understanding (and I will be happy to be corrected) that General Cognitive Ability is generally thought to be a better predictor of job performance than motivation.

    Also, I’d like to add my support to Bryan’s, Gunnar’s, and Tommie’s advocacy of a more scientific, peer-review-based approach to stating what works. As a suggestion: perhaps ERE commentators who make claims should be required to state the source of those claims, and whether they have any particular financial incentives for advocating their position. As an example: I am trying to sell myself as a capable, fact-based consultant/strategist who says “what you need to know as opposed to what you want to hear,” so that hopefully I will be invited to many paid opportunities to share my views, which increases my consulting business.


    Keith “Self-Motivated and Honest About It” Halperin

  44. Keith – a few quick points:
    1. Certainly hiring a motivated incompetent would be a mistake; however, hiring an unmotivated competent person wouldn’t be much better. Of course, the baseline must be a highly competent person. I assume this as a given, so I totally agree with your initial statement. But without the motivation to do the actual work required, you’ve missed the mark. Nothing I’ve seen would refute this.

    2. While I support the scientific/peer-validation aspect of your view, I believe the I/O folks are not considering all of the science available, for the sake of protecting their turf. That was the whole point of the original article – to broaden the science available, not discard it. What I would like to see is proof that companies are actually performing better as a result of using BEI and I/O methods. I have never seen any science around this. Where is it? While most companies would suggest that the reason they’re outperforming their peers is better talent, I have not seen any that would publicly state the reason was the use of BEI. This is where I totally support the work of the Recruiting Roundtable and Leadership IQ, which points to recruiter skills, understanding the job, and the quality of the hiring manager as the primary drivers of improving talent.

    3. I wrote about Performance-based Hiring in my book Hire With Your Head (Wiley, 2007). I’ll be surprised if you find anything in there that has not been validated or doesn’t work exactly as described. It even includes a legal whitepaper and an I/O summary by Dr. Charles Handler. My claims are public and open to inspection. We even have case studies of companies whose bottom-line performance has improved using the process. I don’t think it’s the claims that need substantiating; it’s how one views the world or understands the data. If you’re interested, check out Isaiah Berlin’s Hedgehog vs. Fox commentary (Wikipedia). It seems to fit the comments:

    Berlin expands upon this idea to divide writers and thinkers into two categories: hedgehogs, who view the world through the lens of a single defining idea, and foxes, who draw on a wide variety of experiences and for whom the world cannot be boiled down to a single idea.

    4. What surprises me is that many so-called “scientists” use research to defend the status quo, rather than improving their craft or looking at problems from another perspective in order to solve them.

  45. Thanks, Lou. First of all: a full and speedy recovery to your brother. That kind of thing can be frightening, frustrating, and enraging….

    Re: your work – If I understand it properly, it seems like you’re trying to have it both ways: making your work more scientific (“We will be *funding a PhD program to further validate these results.”) while at the same time decrying the scientific method, or at least the way you perceive scientists use it: “What surprises me is that many so called ‘scientists’ use research to defend the status quo, rather than improving their craft or looking at the problems from another perspective in order to solve them.” Which way is it going to be, Lou?

    As I see it, if you want to claim that what you have done is scientifically/peer-review valid, then offer to let some neutral, unpaid folks test it and submit it for peer review, and let the chips fall where they may. That way it can be proven (if valid and true) and not just anecdotally claimed. Otherwise, I suggest you say your work is valid based on your organization’s many years of successful experience, which (as I have said before) can be valid and true, just not scientific.

    I’m not on the I/O, O/D, or assessment side – I want to find out what works. PERIOD. I believe (as Gunnar implied) that perhaps there’s an inherent “fuzziness” to hiring that can’t be completely eliminated – a sort of “Uncertainty Principle of Hiring” – and IMHO that’s where the good recruiter comes in: effectively dealing with non-quantifiable, non-verifiable, highly subjective areas. That’s the fun stuff for me….


    Keith Halperin

    *In my mind that taints the neutrality/objectivity.

  46. Keith – In the 457 ERE articles and assorted comments I’ve written over the past 10 years, I don’t think you’ll find one mention of me not advocating a scientific approach to validation; it’s just not the same old science some of those in the I/O field still advocate. We’ve thrown a little physics, six-sigma, SPC, and behavioral economics into the mix. If you had read Hire With Your Head and/or talked with a single person, recruiter, or company that has used it, you’d realize its value and impact, so I suspect you haven’t done this. It’s not rocket science, just plain old common sense based on how the best people look for work, accept offers, and work at peak levels. The reason I’m not a big fan of BEI is that it hinders this goal, and we’ve found far better ways to assess competency that are far more respectful and more accurate. For one thing, answers can’t be faked, and there are only two questions. (We also advocate 100% validated assessment testing, reference checking, drug testing, and complete background verification for all finalists.)

    The objective of the book was to offer a scalable business process for maximizing quality of hire, not just increasing assessment accuracy, although that sub-objective has been met. Due to the broader scope, this is what I would like to have formally validated via a PhD study – similar to what Collins did with Good to Great – to validate the findings of the 100-plus companies that are now using it. (Note: companies fund their internal validation studies, so this is not unusual.)

    Also note that Performance-based Hiring is being used right now and has been for 20 years. The first edition of Hire With Your Head came out in 1997 and was based on 10 years of findings. Surprisingly, no one thought it was superficial or ineffective. It wouldn’t be in its third edition if it didn’t work; it’s just different from what the typical I/O folks advocate, and far simpler to implement.

    Here’s the link to Amazon for the book – – buy it, and if you don’t think it’s scientific enough, I’ll personally refund your $20. Note: I haven’t found anyone who discredits the process after they’ve read the book, but if someone did, I would want to understand why and modify the process accordingly. This is a standing offer I make to everyone.

  47. Perhaps the disconnects between the recruiters and I/Os reflect a lack of consensus around what is meant by QoH. My sense is that all of the commentators are looking at the same car, but seeing it from different angles.

    Wendell and CorDell did a nice job describing factors that contribute to QoH, and they also acknowledged the difficulty of actually defining it. Therein lies part of the problem. On the one hand, I believe the I/Os are advocating good science based on clear operational definitions, rigorous methodology, and statistics (“There are three kinds of lies: lies, damned lies, and statistics.”) to directly link an assessment (e.g., BEI, GMA, or personality) to an outcome that is important to a company. This approach is often narrowly focused in order to isolate the impact of a predictor and to minimize the impact of extraneous factors that might contaminate the results; in fact, the EEOC is particularly sensitive to situations where companies are using tests based on research contaminated by unrelated variables. On the other hand, Lou et al. seem to be promoting a more macro-level approach focused on linking the impact of each factor that contributes to QoH (cf. CorDell’s response) to job performance AND broader business outcomes.

    Rather than being mutually exclusive, the two approaches appear symbiotic. A critical task is to find a way to balance the competing demands of finding enough of the right people, creating defensible screening processes, and evaluating the impact of a placement in terms of performance, tenure, and overall business impact. In fact, the study that it sounds like Lou is commissioning may well be a good first step in helping to create much-needed confluence between the two streams.

  48. Thanks, Lou. Perhaps I’m being unclear and/or I’m misunderstanding you. Your comment, “What surprises me is that many so called ‘scientists’ use research to defend the status quo, rather than improving their craft or looking at the problems from another perspective in order to solve them,” seems to me to be criticizing scientists for how they operate, i.e., the scientific method. If that’s not what you mean, and you are instead saying some scientists are rigid, narrow-minded, and dogmatic, I certainly agree with you there.

    If I sound like I’m saying that your methods do not work, I am not. I believe they DO work – you and your team are very good at what you do, and have a great deal of experience doing it. What I AM saying is that they haven’t been scientifically validated by a peer-reviewed study.
    If I am wrong (or when that occurs), please let us know.
    Until then, it is anecdotal and not scientific. If you are not claiming it to be scientific (but probably valid nonetheless), then I agree and have misunderstood you.



  49. Keith

    The definition of scientific validation is different from peer-reviewed. From a peer-review standpoint, I would state that 100% of recruiters who have used Performance-based Hiring would totally support the results. This same group of “peers” would totally discount BEI. I would also state that 100% of hiring managers would agree the results are exactly as described, and this group would further malign BEI as not useful and counterproductive.

    Re: scientific. If you mean A vs. B control-group studies and correlation studies of accuracy vs. on-the-job performance, then the answer would also be yes. These studies were funded by the companies that implemented the process, using their I/O groups and supported by in-house legal. As mentioned earlier, there are references who can speak to these studies, and many are included in the book.

    I think peer-reviewed is too low a standard, especially if the peers are of a single mind and unwilling to change. Interestingly, BEI is peer-reviewed and scientific, but only captures 40% of the variance. Hopefully, this isn’t your definition of scientific validation.

  50. If there is any interest left in this topic, here’s a point on quality of hire, along the lines of what Brent and CorDell have described.

    I just left (today, 2/22) a company in San Diego. We put together a performance profile for a production manager. The first question was: what would the person need to accomplish to ace the performance review? We then went on to create 7 performance objectives that defined quality of hire. During the interview, the hiring team will get detailed examples of comparable accomplishments and rank the person on a 1-5 scale using our 10-factor talent scorecard (email me if you’d like a copy). This is our pre-hire quality of hire measure. Throughout the year, this same form will be updated to track actual performance against predicted performance. Correlations on these measures are quite high – closer to .7-.8 when the rating team agreed within plus or minus .5 on the 1-5 scale for each of the 10 factors when the person was first interviewed. When the initial ratings are wider than this, the correlations to actual performance drop. Rating control is the key to assessment accuracy. Confirming this on a larger scale would be the basis of a more public study.
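    The rating-control filter described above can be sketched in a few lines. This is a hypothetical illustration with invented names and data, and it takes one reading of “plus or minus .5” – that every rater’s score falls within 0.5 of the panel’s mean:

```python
# Hypothetical sketch: treat a pre-hire rating as "controlled" only
# when the rating team's scores agree within +/- 0.5 of the panel
# mean; correlations to later performance would then be computed on
# that subset. All data below is invented.

def rating_controlled(panel_scores, tolerance=0.5):
    """True if every rater's score is within `tolerance` of the mean."""
    mean = sum(panel_scores) / len(panel_scores)
    return all(abs(s - mean) <= tolerance for s in panel_scores)

hires = [
    # (panel's 1-5 scores on one factor, performance a year later)
    ([4.0, 4.5, 4.0], 4.2),
    ([2.0, 4.5, 3.0], 2.8),   # wide disagreement -> excluded
    ([3.5, 3.0, 3.5], 3.1),
    ([5.0, 4.5, 5.0], 4.8),
]

controlled = [(sum(p) / len(p), perf) for p, perf in hires
              if rating_controlled(p)]
print(f"{len(controlled)} of {len(hires)} hires had controlled ratings")
```

    The resulting (predicted, actual) pairs in `controlled` are what you would then feed into a correlation calculation, which is how the .7-.8 figure for the controlled subset could be checked on a larger sample.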

  51. Thanks, Lou. Yes, it is my definition, and yes, it is flawed, but like democracy it may be the worst of all systems except for all the others. Also, since you regard it as “too low a standard” (though used by tens of thousands of academics, scientists, and engineers at thousands of institutions worldwide), I’d think there’d be no problem in submitting PBH for peer review – if it passed, it would add to its credibility, and if it didn’t pass, you’ve already discounted peer review’s value…
    Have you considered Open Peer Review ( which is supposed to fix some of standard peer review’s problems:
    “In 2006, a group of UK academics launched the online journal Philica, which tries to redress many of the problems of traditional peer review. Unlike in a normal journal, all articles submitted to Philica are published immediately and the review process takes place afterwards. Reviews are still anonymous, but instead of reviewers being chosen by an editor, any researcher who wishes to review an article can do so. Reviews are displayed at the end of each article, and so are used to give the reader criticism or guidance about the work, rather than to decide whether it is published or not. This means that reviewers cannot suppress ideas if they disagree with them. Readers use reviews to guide what they read, and particularly popular or unpopular work is easy to identify.

    Another approach that is similar in spirit to Philica is that of a dynamical peer review site, Naboj.[24] Unlike Philica, Naboj is not a full-fledged online journal, but rather it provides an opportunity for users to write peer reviews of preprints at The review system is modeled on Amazon and users have an opportunity to evaluate the reviews as well as the articles. That way, with a sufficient number of users and reviewers, there should be a convergence towards a higher quality review process.
    In February 2006, the journal Biology Direct[25] was launched by Eugene Koonin, Laura Landweber, and David Lipman, providing another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, then the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.[26]
    An extension of peer review beyond the date of publication is Open Peer Commentary, whereby expert commentaries are solicited on published articles, and the authors are encouraged to respond. In the summer of 2009, American academic Kathleen Fitzpatrick explored open peer review and commentary in her book, Planned Obsolescence, which was published by MediaCommons using Commentpress, a WordPress plugin that enables readers to comment on and annotate book-length texts.”

    Also, your San Diego work seems very sensible and practical.



  52. Lou – the matching of assessment and performance measure (predictor and criterion) is exactly what I described in a comment on a separate post – that’s really the way to raise the correlation beyond what we see in a typical study. The reality is that in most organizations there is a mismatch (for better or worse) between the KSAs measured in the assessment and follow-up measures of performance; this is definitely an area for further discussion.

    One problem with a behavioral approach, however, is when you’re hiring for a job where previous experience (even comparable experience) is highly unlikely (usually entry level or jobs that require extremely rare skills) or when what you’re really after is raw potential rather than experience. That’s where cognitive/learning ability and the other factors you’ve talked about (e.g., P-O fit) become much more important as predictors.

  53. Bryan – thanks for your post. It’s obvious you’ve been there, and I totally agree with your comments. The less experienced the person, the greater the need to rely on behaviors. On a graph of behaviors vs. performance, past performance becomes a better predictor of future performance as experience increases.

    We helped the YMCA hire 100,000 camp counselors, 16-17 year olds, and after visiting 10 camps found the core behaviors for success were reliability, planning ahead vs. reacting, proactive concern for younger children, coaching, and physicality.

  54. Hi Lou,

    Isn’t the premise that “past performance becomes a better predictor of future performance as experience increases” valid only if the future behaviors are the same as, or similar to, past behaviors? If that were the case, then outstanding sales reps should make outstanding sales managers, which is quite often far from true. Therefore, ISTM that the more a position relies on new skills, the more a behavior-based approach (as opposed to a past-performance-based one) would be indicated. In simple terms: if you want someone to do the same thing they’ve done before, see what they’ve done. If you want them to do things new to them, find out who they are….

    Your thoughts, folks…


  55. Keith – your logic is backwards on the sales rep vs. sales manager piece. Somehow you’re not getting the idea. Past performance as a sales rep selling similar products, to similar customers, in similar channels, etc., would be the best predictor. Sales reps aren’t managing at all, so promoting one to manager is a huge risk. Past performance as a sales manager – managing a similar group of sales reps selling similar products, in similar channels, with a similar process, to similar customers – would be a good predictor.

    You lost a lot of credibility with that last post.

    Even Schmidt and Hunter prefer past performance over past behavior. For example, being highly motivated as a sales rep doesn’t imply the person would be highly motivated – or competent – as a sales manager, accountant, etc.

  56. Lou, I’m afraid I have been misunderstood again, and that may be why you’re attacking my *credibility. What I tried to say is that the KSAs someone acquires in becoming a great sales rep may not be the same as those necessary to be a great sales manager. Therefore, it would make a great deal of sense to use this person’s past performance as an indicator for another sales rep position; but a sales manager position may require many different KSAs to be successful, which the person may or may not possess. Success/aptitude in one field does not necessarily translate to another field. You can be highly motivated to succeed and fail miserably. I do not know what the best predictor of performance in a new area is – it sounds like it might be behaviors as opposed to performance, but show me some peer-reviewed evidence to the contrary and I’ll believe that.



    * I LOVE it when the attacks on credibility, character, honesty, etc. begin…

    Guard your honor. Let your reputation fall where it will.

    Lois McMaster Bujold, “A Civil Campaign”, 1999

  57. Keith – feel free to attack my credibility; I don’t mind. But I don’t have a clue what you’re talking about. It’s either me or your questions. (Wendell attacks my credibility all of the time, and I can deal with it.) If you’re going to raise questions, the quality of the questions is subject to question, so you need to take the heat. Remember the quote about the kitchen.

    If you’re somehow inferring that a great sales rep wouldn’t necessarily be a great manager, you’re right. It’s so obvious it’s a moot issue, so why discuss KSAs at all? No one would dispute your premise.

    Attend the great debate and ask your questions. We have Dr. Tom Janz and Dr. Charles Handler on the debating team, and I’m sure they’ll give you the info you need.

    Regardless, I always tell people to not believe what any of the experts say, including me. Instead try out some of the ideas. If they don’t work 90% of the time, you have every right to toss it aside.

  58. Keith – now I know why you’re asking your question – you sell a competing product. Why didn’t you just say that, rather than beat around the bush?

  59. Lou Adler wrote (Feb 24, 2010 at 6:39 pm): “Keith – feel free to attack my credibility, I don’t mind. But I don’t have a clue what you’re talking about. It’s either me, or your questions. (Wendell attacks my credibility all of the time, and I can deal with it.) If you’re going to raise questions, the quality of the questions are subject to question. So you need to take the heat. Remember the quote about the kitchen.”

    Lou (or anyone else), feel free to call my questions, premises, and logic into question any time you like – that helps me improve. “Credibility” to me sounds more like an attack on a bit of myself, not my ideas…

    “If you’re somehow inferring that a great sales rep wouldn’t necessarily be a great manager, you’re right. It’s so obvious, it’s a mute issue, so why discuss KSAs at all? No one would dispute your premise.”

    If no one disputes the premise, why is management in a given field so often given as a reward for exceptional sole-contributorship?

    “Attend the great debate and ask your questions. We have Dr. Tom Janz and Dr. Charles Handler on the debating team, and I’m sure they’ll give you the info you need.”

    That could be interesting. What’s the topic, Lou?

    “Regardless, I always tell people to not believe what any of the experts say, including me. Instead try out some of the ideas. If they don’t work 90% of the time, you have every right to toss it aside.”

    Lou, that’s part of the problem we have. IMHO, if the “experts” are willing to have their premises, logic, and practices carefully and continually scrutinized in a neutral, objective way (the way scientists, engineers, and academics should) and they hold up, we SHOULD trust them. Your products and practices seem to hold up (even though I understand they’re anecdotally based, with lots of cases), so we should probably trust them. We SHOULDN’T trust someone just because they have a good presentation, effective marketing, and prestigious clients, as a number of your ERE colleagues do.

    “Keith – now I know why you’re asking your question – you sell a competing product. Why didn’t you just say that, rather than beat around the bush?”

    Lou, I do not have any product to sell, except perhaps “me”. I am trying to portray myself as an experienced recruiter who has seen a great deal of good, bad, and indifferent recruiting practices, and who hopefully has enough breadth and depth over 20+ years to help design, implement, and improve recruiting based on “what works” (formally validated or just anecdotal), and not on the current fad or the ingrained prejudices of the “powers that be”. That’s my pitch.


  60. A friend of mine suggested I check out this thread. Wow, quite the debate. Full disclosure: I didn’t read every comment in great detail, but here is my overall reaction as a knowledgeable assessment developer who has not been in this debate to date.

    It’s somewhat pointless to argue for or against assessments in a general sense. Whether assessments add value to the hiring process depends on multiple factors, including:
    1. What kind of assessment? There is a big difference between structured interviews, drug screens, and ability tests. Most assessments can work in some situations, but no assessment technique works all the time everywhere.
    2. The kind of job and nature of the organization.
    3. The number of applicants that have to be processed and hires that have to be made.
    4. The skills of hiring manager and recruiter.
    5. The budget of the hiring company.
    6. The number and nature of the candidates. Using the same assessment methods for a Fortune 1000 CEO candidate and a frontline hourly employee is unlikely to make sense.

    In keeping with the “self-promotion” theme of several earlier comments, here’s my plug. I discuss a lot of the concepts in this debate in a book published by Pfeiffer/SHRM called “Hiring Success: The Art and Science of Staffing Assessment and Employee Selection.” This book has been favorably reviewed by both assessment academics and staffing practitioners. Anyone who wants a more thorough discussion of the relative value of assessment techniques may want to check it out.

