The Financial Impact of Not Hiring the Least-best

The financial gain of hiring A-level talent is probably 10-100 times the person’s compensation.

The financial cost of hiring a walking lawsuit is probably 10-100 times their compensation.

Assuming the duds and the stars represent 10% of your total hires, it’s what you do with the other 90% that really matters.

To get a sense of the enormous financial impact of shifting people from the bottom half into the top half, first categorize the 90% into three big buckets of roughly equal size: the best, the not-quite-best, and the least-best.

  1. The best, or upper-third. These people are the foundation of your company or organization. They work hard, frequently exceed expectations, do more than asked, achieve consistent, high-quality results, can always be counted upon, need little direction, never make excuses, work extremely well with everyone, and can take over projects even when they have less expertise than usual. As a result they get promoted at a faster rate. If your hiring process is flawed, you won’t have enough of these people to grow your company.
  2. The not quite best, or the middle-third. These are the partially competent. Generally they’re strong technically, but missing a key ingredient or two. On the other hand, they get the job done with limited direction, can be counted on in a crisis, work reasonably well with others, and get promoted when there’s no one else around, but they’re generally not the first choice. Sometimes they get hired because they seem safe.
  3. The least-best, or the bottom-third. These are the people who just don’t fit somehow. Sometimes they’re good people in the wrong jobs. They need extra coaching and supervision to achieve average results. Often they cause unnecessary conflict. They are often hired because they interview well, are enthusiastic and affable, and have the requisite experience. If this group represents more than a third of your workforce, you have a real problem.

What’s surprising about the middle and bottom groups is that when they were hired they all seemed fully qualified. They all had the right experience, the right academics, and the right skills. Many of them even had the right behaviors and competencies specified on the job description. However, something happened after they were hired that caused a great many of them to underperform. This is typically due to lack of interest in the work, a weak relationship with the hiring manager, a lack of team skills, poor cultural fit, and inconsistent work habits.

The cost of hiring these least-best people is enormous. To gain a rough sense of it, first calculate the average profit per employee (APE) for your company by multiplying your revenue per employee (RPE) by a reasonable estimate of contribution margin, which is probably around 30-40 percent.

So if your RPE is $400,000, your APE at a 30 percent margin is $120,000. This means that on average, each new employee should generate $120,000 in pre-tax profit if as a group they’re doing work similar to your existing employees. Of course, this doesn’t take into account different jobs and different salaries, plus a lot of other missing stuff, but it’s still useful for making a point.

Now assume the top group is 20 percent more impactful than the middle group and the least-best is 20 percent less impactful than the middle group. This means the top group generates $144,000 each in profit ($120,000 × 1.2) and the least-best group generates $96,000 each ($120,000 × 0.8). The difference between the top group and the least-best group is therefore $48,000 in pre-tax profit per person per year. This means that for every person you replace in the bottom-third with a top-third person you’ll make an additional $48,000 in profit. If you do it 100 times, you’ll earn $4.8 million in additional pre-tax profit. (Here’s a summary graph of this for a number of companies ranging from Goldman Sachs at the high end to government contractors at the bottom.)
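
For readers who want to plug in their own numbers, here is a minimal sketch of the arithmetic above. The revenue figure, contribution margin, and 20 percent impact spread are the illustrative assumptions from this example, not benchmarks.

```python
# Rough profit-impact arithmetic from the example above (illustrative numbers only).

revenue_per_employee = 400_000      # RPE from the example
contribution_margin = 0.30          # assumed incremental contribution margin
impact_spread = 0.20                # top third assumed 20% above the middle, bottom third 20% below

ape = revenue_per_employee * contribution_margin           # average profit per employee: $120,000
top_third_profit = ape * (1 + impact_spread)                # $144,000
bottom_third_profit = ape * (1 - impact_spread)             # $96,000
gain_per_upgrade = top_third_profit - bottom_third_profit   # $48,000 per person per year

print(f"APE: ${ape:,.0f}")
print(f"Gain per bottom-third hire replaced with a top-third hire: ${gain_per_upgrade:,.0f}")
print(f"Gain across 100 such replacements: ${gain_per_upgrade * 100:,.0f}")   # $4.8 million
```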

Using this macro-level analysis, it’s pretty easy to justify implementing a top-third hiring strategy. Pulling this off, however, is not that easy, and there are no silver bullets, other than being best-in-class. So if you’re not in this category and if your supply of candidates is far less than your demand, you’ll have to reengineer most of your processes to hire the top-third. Here are some quick ideas on how to get started.

  1. Sourcing. The top group doesn’t look for new jobs the same way the bottom group does. Most likely they initially heard about your openings through a referral, a recommendation, or networking with a recruiter or current employee. If this is the case, it makes sense to expand these efforts and minimize efforts on the channels that attract the bottom-third. Before eliminating job boards altogether, however, find out what kind of messaging appeals to the top-thirders. Generally this involves career growth and learning opportunities, challenges, and the chance to make an impact. Write some ads emphasizing these points, use search engine optimization to make sure they’re found, and track the performance of the different major and niche boards. Then keep the ones that actually attract the top-third. Of course, in the future, make every new sourcing vendor prove they can attract the top-third. The key to all this is to use top-third decision-making and consumer marketing to drive your sourcing channel strategy, not some new vendor of the week.
  2. Recruiting. In general the least-best are looking for jobs, and those in the top-third are looking for careers. (This is probably the primary reason for variations in on-the-job performance, too.) Looking for a job is much more transactional than looking for a career. Those looking for a career have more informational needs and questions than the bottom-third, yet these are often overlooked in the rush to start posting boring jobs and arranging interviews. To allow more top-thirders into the process, a new and formal information-exchange step needs to be inserted before the application process. Good recruiters do this naturally, but formalizing it would increase the number of top people involved by widening the top end of your prospect funnel. At the back end, you can increase the number of top-third hires by providing finalists a score sheet that allows them to formally evaluate your opportunity across a broad range of short- and long-term criteria (e.g., growth, learning, impact, team). (Email me if you’d like a sample form.)
  3. Assessing. While you can eliminate most mistakes using traditional behavioral interviewing, discerning the differences between the top and bottom thirds takes more investigation. Being fully qualified doesn’t mean being fully motivated, or being able to work in your environment, or working well with the team and hiring manager, or being well-organized, timely, committed, consistent, or flexible. These are the typical causes of underperformance, and ignoring them is a setup for failure. To determine if someone is in the top-third I suggest creating a performance profile to define real job needs, digging deeply into comparable accomplishments using the one-question interview, and using the 10-factor talent scorecard to collect and assess the evidence.

From a practical standpoint it’s unlikely you could ever successfully implement a hiring process that hires only the top 5 to 10 percent. However, hiring the top-third is a reasonable and sustainable target. It would certainly raise the talent bar at an insignificant cost and with a huge ROI. It all starts by letting the needs and decision-making criteria of the top third drive your sourcing, recruiting, and interviewing processes. As far as I’m concerned, being in compliance doesn’t mean being ineffective; in most cases, that’s just an excuse to maintain the status quo.

Lou Adler is the CEO and founder of The Adler Group – a training and search firm helping companies implement Performance-based Hiring℠. Adler is the author of the Amazon top-10 best-seller, Hire With Your Head (John Wiley & Sons, 3rd Edition, 2007). His most recent book, The Essential Guide for Hiring & Getting Hired (Workbench, 2013), has just been published. He is also the author of the award-winning Nightingale-Conant audio program, Talent Rules! Using Performance-based Hiring to Build Great Teams (2007).

71 Comments on “The Financial Impact of Not Hiring the Least-best”

  1. Good article. My only question, Lou, is: do you advocate that organizations should strive to hire A-level talent for every position in the business?

  2. Rick – I don’t think the top-third is A-level talent. This is probably the top 5%.

    I think the top-third is comprised mostly of strong B-level, B+ and A- level people. This type of mix to me is very achievable. These are really the people who make a company run successfully. It’s the average B-level and below that cause most of the problems.

  3. To re-phrase my question: while A-level talent may end up being the top 5% of any business, should organizations strive to hire A-level talent for every role, given that A-level talent may command more pay, faster career progression, etc.? Or, as in talent planning and segmentation exercises, are some roles simply not critical enough to justify hiring A-level talent? Or is the financial gain from A-level talent, as you state it, always the better strategy?

  4. I think the A-level should be mandatory for critical positions – those that affect the company’s strategy or bottom-line performance. For rank-and-file positions I believe a top-third target is more realistic. This way, you’ll wind up with a few top people who will be promotable quickly.

  5. Lou, another great article, as usual. I appreciate that your thought process does not rely upon matrices and minutiae. Rick, really great questions and comments. I will add my $0.02 to see what you guys think.

    1. Everyone should realize that “A” players were not always such. If you hire Beethoven, Einstein, or Gates, then you hired an original A player. For the rest of us, we aspire to become such, and usually start as B or hi-C players. Evolution creates the A player.

    2. Because of #1, A players are seldom your best hire. That does not mean you should ever pass on an A player, but should certainly consider the aspects Rick correctly detailed. Far better for most positions to hire a B player, or a hi-C with a strong assessment and desire to learn and grow. If you do your part, those will become the A and B players in your future, and will likely be more thankful for the growth and opportunities, and hence more loyal to you as an employer.

    3. Lastly, and the one practice so few companies have the fortitude to embrace: there needs to be a well-organized, qualitative analysis and force ranking of all performers, annually. Sadly, most companies carry their bottom 10% performers for eons after they have identified them as “weak” or “needs help”. Each time I have been in a position to do so, I have pushed for the bottom 10%’ers to be thoroughly evaluated and to receive specific, quantifiable, and attainable goals for improvement, a workable plan for achievement, and a supervisor’s commitment to work with the employee for their success and to thoroughly document progress. If the goals are not achieved in a specified time, they are gone, PERIOD.

    Think about the implications of such a process. It is surely a wake-up call to all who are not in the top 30%. The people who rank in the bottom 20-40% get a blindingly stark vision of their future in the company if they don’t improve. The 40 to 70% people realize that you are not going to replace the departed employees with bottom-feeders, which possibly jeopardizes their own safe standing as strong new replacements are hired. Perhaps most important, the folks in the top 30% are now energized by your commitment to quality and performance, and less likely to leave because they recognize the value of what you are doing.

    If this practice is applied consistently, honestly, and without partisanship, there is no way it can fail to invigorate a company.

    Fire away!!!!!!

  6. Excellent article!! A must-read for all in the hiring industry. This article speaks volumes to those who don’t believe in hiring and retaining performers!

  7. Lou, Lou, Lou. Once again, words pulsing with enthusiasm, wit, and drive but the theatrics don’t serve to inform or convince as broadly as they could. At least not for the ‘some of the people’ that you want/need to impact.

    From the top then. Let’s just focus on your first declaration for now– “The financial gain of hiring A-level talent is probably 10-100 times the person’s compensation.” Quite a range there, Lou. Then there is that tell-tale word ‘probably’. What is that about? It reveals the intuitive-experience vs. evidence-based foundation for everything else that follows.

    As I have said before in response to your insightful but imprecise arguments, the path on which you tread is not rocket science, but it is people science. Anyone familiar with the normal distribution can tell you that the expected value of a top-third performer is 35% (not 20%) higher than that of a middle-third performer (a quick sketch of that arithmetic follows this comment). Why not just learn the science, or work with a scientist, and get it right?

    Then you could drop the word ‘probably’ and at the very least, have a basis in evidence for your confidence in the value of your recommendations. I have commented before on how very often your intuition drives to conclusions/ recommendations supported by science. I suggest you could have greater impact with the knowledge market present in the Global 1000 by learning some ‘new science tricks’. You may be a senior canine and not have the appetite, but you are way too smart to be unable to do it.
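
A quick check of the 35 percent figure above: the gap between the average of the top third and the average of the middle third of a normal distribution is about 1.09 standard deviations. Turning that into a percentage requires an assumed ratio of output SD to mean output (about 0.32 here, which reproduces 35%); that ratio is an assumption for illustration, not a figure stated in the comment, and it varies by job.

```python
# Conditional means of the thirds of a normal distribution, to sanity-check the
# "top third is ~35% above the middle third" figure.
from scipy.stats import norm

z = norm.ppf(2 / 3)                        # cut between the middle and top thirds (~0.43)
top_third_mean_sd = norm.pdf(z) / (1 / 3)  # E[Z | Z > z] in SD units (~1.09)
middle_third_mean_sd = 0.0                 # symmetric interval around the mean

gap_in_sd = top_third_mean_sd - middle_third_mean_sd   # ~1.09 SD

# Assumed ratio of output SD to mean output; ~0.32 reproduces the 35% figure,
# but the right value depends on the job and is not given in this thread.
sd_to_mean_ratio = 0.32
print(f"Top-third vs. middle-third gap: {gap_in_sd:.2f} SD "
      f"≈ {gap_in_sd * sd_to_mean_ratio:.0%} of mean output")
```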

  8. Dr. Tom – while I have enormous respect for your work, this wasn’t a scientific article; the title might provide a clue to that. It was really an article about developing a strategy focused on hiring the top-third. This could be a forest vs. trees issue for some, but the financial implications of hiring the top-third are huge whether it’s 25% or 35%, and even if it’s only a 10% difference between thirds.

    The real critical points of the article have to do with how you source, assess, and recruit/close the top-third in comparison to the bottom-third. Focusing on the so-called “scientific” aspects of hiring and ignoring what happens on the front lines might be the reason most companies haven’t been able to consistently hire the top-third. In my opinion, more emphasis should be put on developing science based on how the top third engages, decides, and accepts offers.

  9. Fair enough that the critical thrust of the article has to do with differentially treating the top, middle, and bottom thirds – which makes great sense. Then why not just make that assertion? Why dress up the motivation to do so with pseudo-science that detracts from the value of your message? I hope that researchers follow the valuable advice you offer in the last sentence of your comment to me.

    However, your response to my comments strays from the canine to the feline when you say “Focusing on the so-called ‘scientific’ aspects of hiring … haven’t been able to consistently hire the top third.” I could slice, dice, chop, and/or shred that sentence six ways to Sunday, but I admire you too much and I would most likely not change minds in any case. And I think Dr. Williams has the official curmudgeon role on ERE all tied up.

  10. Tom – please do the dicing and slicing! The point is that there will always be a bottom-third, and the continual focus, coupled with continuous process improvement, has to be on how you raise the talent bar.

    Hiring managers in the bottom half have great difficulty hiring the top half. How do you get around this? How do you get around the idea that the top-third is looking for a career, not a lateral transfer, yet most companies offer lateral transfers? Please provide any insight that suggests companies other than Amazon are doing this as a strategic thrust. As part of this, also prove that behavioral interviewing and competency modeling do anything more than eliminate most hiring mistakes; they can’t differentiate between the thirds. The pseudo-science you refer to actually is real science, just not your science, so I think you’re totally off-base with that comment. PS – I think you and Dr. W should actually compete for the curmudgeon role; it will be a great battle.

  11. Very interesting article, thank you Lou. While I agree with the sentiment, if you’ll forgive me, the word ‘probably’ also came screaming out at me.

    While logic and intuition are the bedrocks of many conversations, ‘a priori’ arguments are weaker than those that are substantiated. In fact, I would be interested if Dr. Tom Janz could provide advice on the best sources of data that I, and other readers, could use.

  12. Thank you Ryan. First, the sources you request. Then to the slicing and dicing that Lou requests. Might as well get the work out of the way before starting into the fun.

    Here are two sources that cover a lot of what is known. First, the widely cited 1998 Psychological Bulletin article by Frank Schmidt and John Hunter, “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.”
    For something more recent, try the 2008 volume of the Annual Review of Psychology, chapter on Personnel Selection by Sackett and Lievens. Neither is to be confused with a Harry Potter novel, but both do the heavy lifting around what we know about hiring more top-third and fewer bottom-third performing candidates.

  13. Ryan/Tom – “probably” was probably a bad choice of words; in this case “somewhere between” would have been better suited. Regardless, the point of the article was to eliminate the top and bottom 5% from the discussion since they don’t represent the bulk of any firm’s hiring activity. Tom contends that instead of assuming a 20% difference in performance between the top third and the average, 35% would have been more scientific. However, even if that is more precise, the use of APE – average profit per employee – as the basis for determining the financial cost of shifting a person from the bottom-third to the top-third seems more controversial, yet nothing was said about this. Likewise, the fact that I used a gross estimate of incremental contribution margin to calculate APE was left out of the criticism. This way of calculating the lost opportunity cost seems much more questionable than taking a conservative assumption regarding the performance differential between the top and bottom thirds.

  14. Now on to the slicing and dicing that Lou requested. Readers should realize that Lou and I are in violent agreement around the need to increase the hiring of top third and reduce the hiring of bottom third candidates. What I objected to was Lou’s attempt at a zinger when he typed ‘Focusing on the so called “scientific” aspects of hiring and ignoring what happens on the front-lines might be the reason most companies haven’t been able to consistently hire the top-third.’

    Focusing on “science” is hardly the reason for the huge dollars left on the table after candidates who accepted their offers show up for work. “Science” would mean validation research, and very few companies do that. Anyone care to disagree on that point?

    So here is the real reason, and it isn’t pretty. The solution Lou offers, Performance Interviewing, is a 2-5% solution at best. What do I mean by that? I mean that even though I agree that Performance Interviewing offers the best shot at maximizing top-third hiring for the candidates who make it to the short list, the short list is just 2-5% of the eligible candidates who apply, these days. To make a real impact on increasing top-third hiring, you have to impact the decisions made about who gets onto the short list. That is why PeopleAssessments.com partnered with Dr. George Paajanen to offer one of the most successful screening tests ever created, and designed the BIO (Behavioral Interview Online). Screening tests measure behavioral potential and the BIO measures past performance, with answers auto-confirmed by credible third parties. So we put BOTH potential and performance to work to create the golden short list. And we do this for hundreds of eligible candidates (all of them) for the same cost as testing the top 10 using traditional online assessments, which cost $5-90 each.

    So I leave it for readers to come to their own conclusions. Stick with traditional resume sorts and screening interviews (which we KNOW from hundreds of studies result in low decision power, less than 20%) and then launch into Performance Interviews (decision power around 50%) for the 3-5 on the short list, OR test and BIO all the eligible candidates (decision power at least 40%), video interview the top 10, and interview the top 2 in person guided by a Personalized Interview Kit. Let’s see: 2-5% vs. 95-98%. Who’s gonna win? My money is on us. Care to put up, head to head, Lou?
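
To put rough numbers on the long-list vs. short-list argument, here is a minimal utility sketch. It treats the “decision power” figures quoted in the comment above as validity coefficients, collapses each multi-stage process into a single top-down cut at the same short-list selection ratio, and uses the standard result that expected hire performance (in SD units) is roughly validity times the mean predictor z-score of those selected. The specific numbers are the assertions from the comment, not established benchmarks.

```python
# Rough selection-utility comparison: a weak screen feeding a tiny short list vs.
# a moderately valid assessment applied to every eligible candidate before the same cut.
from scipy.stats import norm

def mean_z_of_top_fraction(selection_ratio: float) -> float:
    """Average standardized predictor score of the top `selection_ratio` of candidates."""
    cut = norm.ppf(1 - selection_ratio)
    return norm.pdf(cut) / selection_ratio

def expected_hire_performance(validity: float, selection_ratio: float) -> float:
    """Expected job performance of hires, in SD units above the applicant-pool mean."""
    return validity * mean_z_of_top_fraction(selection_ratio)

short_list_ratio = 0.03   # a 3% short list, per the "2-5%" figure above

print("Weak screen (r ~ .20) to the short list:",
      round(expected_hire_performance(0.20, short_list_ratio), 2), "SD")
print("Valid assessment of everyone (r ~ .40):",
      round(expected_hire_performance(0.40, short_list_ratio), 2), "SD")
```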

  15. Dr. Janz,

    Wouldn’t your system be counter-productive in short-supply fields, where top talent must be attracted from the ranks of competing organizations?

    How would you get top talent, and not just the eligible job hunters, to take the time to complete your selection process?

  16. Scientific or otherwise, the point is that the segmentation of people will always be there.
    Say one organization’s bottom layer is better than the top layer of its competition. I am curious to know what the management’s thought process would be in such a case. Does it still worry about the bottom layer? Does such a thought process by any chance lead to undue work pressure on the bottom layer? Where does one stop in the effort to convert the bottom to the top (thereby ending up with a new bottom)?

  17. Tom – the top-third is the top-third, not the top 5%. Top-third means 33.3 percent of the firm’s total hiring. The top 2-5% are the A+ level folks. The top-third represents the solid B-level people and above. This is a very realistic goal to achieve. This was the whole point of the original article – setting up a strategy to hire the top-third and some ideas on how to do it. Putting in an assessment test too soon is guaranteed to lose these people. Let’s get some science around this and you’ll discover the validity of this commonsense idea.

  18. Dennis: It has been mathematically proven that the utility of candidate screening falls to 0 as the number of candidates per opening falls to 1. When a short-supply condition leads to poaching the competition, screening is no longer valuable and performance information carried by known achievements and reputation will exceed that which can be collected from candidate screening. But don’t expect it to come cheap.

    Som: Great points, but Jack Welch turned this argument around to suggest that the bottom 10% at GE could be well suited for other, less-demanding organizations and that one is doing them a favor by helping them ‘find their level’ earlier rather than later. Convincing?

    Lou: For years I have heard the tired argument that the really top performers won’t take assessments. Well, I am sure that some won’t, but it is far fewer than you suggest. If you were right, many of the world’s top brands would be devoid of top talent, since all highly leveraged performers in those organizations face rigorous assessment of one form or another. And those with established performance reputations that can be confirmed without their taking an assessment don’t need to take one. It is, as you said yourself, the 90% of hires (not candidates) that matter, not the top or bottom 5%.

    My point was that by screening 90% of all respondents with valid, practical, online measures of potential and performance, firms can realize the biggest boost in top-third hires and the greatest reduction in bottom-third hires. Leaving it to a powerful measure of performance for the last 2-5% of the candidates leaves it too late. To put the point at its sharpest, it is all about the long list vs. the short list. I argue that one needs to apply the best science and practice possible to both.

  19. Dr. Janz,

    Did I correctly understand your answer to mean that you do not believe the rigorous screening you outlined would be appropriate for direct-search situations?

    “Poaching” does involve long and short listing… what it does not involve, of course, is an en masse response to advertising.

    Are you saying that such rigorous screening would apply best to situations where one is obtaining mass response to job postings from active job hunters?

  20. Dennis: Rigorous screening adds value as long as there is talent to evaluate whose performance value cannot be accurately judged from information either in the public domain (i.e., on the internet) or well known among a small group of specialized professionals.

    Offshore drilling platform managers form a small club in which a “Market Master,” as defined by Jeff Kaye, can develop an intimate knowledge of virtually the whole talent pool. If the performance information does not already reside on the internet, training on Performance Interviewing, applied informally to phone and email conversations or formally to screening interviews, would accumulate all the performance differentiation needed to sort out the top, middle, and bottom thirds in such a restricted talent pool.

    However, the ‘long listing’ you describe, given a larger talent pool where performance information is not so universally accessible, raises the likelihood that rigorous screening adds value. Making that rigorous screening as pleasant and engaging as possible remains an important objective for the assessment provider. We (PeopleAssessments.com) have just released a Strengths Inventory(tm) that taps 8 core Strengths Factors with just 33 items, each consisting of six positive adjectives. If some prima donnas feel that their reputation should precede them and that they “don’t have a dozen minutes to confirm their strengths,” then I wonder what else they don’t have time to do in the service of their egos.

  21. Tom – I don’t know where you got your stats, but most are flat wrong, especially about PBI. I know you believe I don’t use statistical science on these things, but Dr. Charles Handler just did a huge study for us covering over 1500 hires (with one client) and discovered that Performance-based Interviewing resulted in hiring the top-third 75% of the time! The company implemented a very vigorous top-third process with high standards, the development of custom interview guides, the use of a performance profile, and a group evidence-based assessment process. Not only did performance increase, but turnover dropped by half.

    As part of this, the company modified its pre-hire assessment process to ensure that no one who was a top-third person was eliminated. Charles can’t tell you who the client is, but he certainly can tell you about the success. The company dropped DDI because it didn’t work.

    Trudy Knoepke Campbell had similar results at HealthEast Care Systems. Call her for more on her approach.

    In my search firm, we placed 1500 professionals – staff to GM – over 20 years and kept the stats. Our fallout rate – people leaving within one year for any reason – was less than 8%! And 90% of these people were in the top 10%.

    Tom, you need to map your statistics to reality before jumping to conclusions. The prima donnas deserve to be prima donnas. I’ve made a lot of money placing them, and if you offer careers instead of jobs, you’ll discover they’re remarkable people who will spend time with you.

  22. WOW Lou! 1500 hires. That’s a big number! But only one study. That’s a small number. Maybe you can help me with how 1500 hires compares with the 140 million candidates that have taken Dr. Paajanen’s tests. Or the over 240 studies conducted on their effectiveness. Or the research published on multiple variations on the Patterned Behavioral Interview. Give me a break, Lou. You are so imbued with reaching intuitive, gut feel conclusions dressed up with some numbers you pull from air, you wouldn’t know science if it hit you in the face.

    I will check with Charles on that one study, as I am all about getting the skinny on what works. But I get my statistics from the articles I cited in the comments above. How you get your numbers, I am just not sure.

  23. Tom – it sounds like you’re trying to sell your product rather than be rational. For one who believes in “science” you’ve pulled a lot of statistics out of the air without any basis in fact.

  24. Lou,

    It seems to me that you and Dr. Janz have two very different approaches to solving the same problem.

    Correct me if I am wrong, but you seem to propose a people-oriented solution that digs into candidate past accomplishments using standardized interviewing… only using behavioral, IQ, and similar testing to confirm a good hiring decision has been made.

    Dr. Janz seems to advocate online testing and video interviewing to eliminate bottom-third talent upfront, so that a final personality decision can be made between two equally qualified applicants.

    I tend to lean towards a more personal approach. Unless someone is very active in looking for a new job, I cannot fathom the screening process, as outlined by Dr. Janz, being a positive experience. I personally would attempt to avoid the hassle – given a chance – by networking my way into the career. The more bureaucracy, the less likely I would want to work there. That has nothing to do with being a prima donna; it’s just how I and a lot of people like me think.

    That said, testing has a place in the process… just not so heavy on the front end. That and a few bucks will buy you a cup of coffee…

  25. Dennis: That is probably because while you have lots of experience with the ‘personal approach’, you likely have lots more experience with the traditional, lengthy, dry, and regimented world of paper tests put online than you do with short, powerful, engaging screening assessments. There are more and more of the latter showing up now as simulations (the US Army site) and Virtual Job Tryouts (Starbucks and others, developed by Shaker Consulting Group). Speaking of coffee, I heard that the Starbucks Virtual Job Tryout for managers bought them $22,000 per year of increased profitability per store, quite a few bucks I would say. Worth the effort? I would say so.

    Lou – Sigh! I decided NOT to back off in the face of bluster this time. I sort of thought we might end up here, but I tried to play nice, hoping we could avoid it. I have heard of the pot calling the kettle black, but it has never applied quite so well as now. I can add no more clarity to the points already made, and you seem to be in no mood to consume more in any case. What is clear is that you are sidestepping the empirical challenge I issued earlier in favor of good ol’ fashioned mudslinging. I won’t go there. Best of luck to you.

  26. Hi Folks,

    I was already acquainted with the Schmidt & Hunter article that Dr. Janz mentioned, and I’ll put in a link to a discussion of it. (http://bobsutton.typepad.com/my_weblog/2009/10/selecting-talent-the-upshot-from-85-years-of-research.html)

    Here are some sections that the commentator added:

    ======================================================

    “It is always dangerous to say there is one definitive paper or study on any subject, but in this case there is a candidate — a paper I have blogged about before when taking on graphology (handwriting analysis). But there is one article that just might qualify. It was published by Frank Schmidt and the late John Hunter in the Psychological Bulletin in 1998. These two very skilled researchers analyzed the pattern of relationships observed in peer-reviewed journals during the prior 85 years to identify which employee selection methods were best and worst as predictors of job performance. They used a method called “meta-analysis” to do this, which they helped to develop and spread. The advantage of this method — in the hands of skilled researchers like Schmidt and Hunter — is that it reveals the overall patterns produced by the weight of evidence, rather than the particular quirks of any single study.

    The upshot of this research is that work sample tests (e.g., seeing if people can actually do key elements of a job — if a secretary can type or a programmer can write code ), general mental ability (IQ and related tests), and structured interviews had the highest validity of all methods examined (Arun, thanks for the corrections). As Arun also suggests, Schmidt and Hunter point out that three combinations of methods that were the most powerful predictors of job performance were GMA plus a work sample test (in other words, hiring someone smart and seeing if they could do the work), GMA plus an integrity test, and GMA plus a structured interview (but note that unstructured interviews, the way they are usually done, are weaker).

    Note that this information about combinations is probably more important than the pure rank ordering, as it shows what blend of methods works best, but here is also the rank order of the 19 predictors examined, rank ordered by the validity coefficient, an indicator of how strongly the individual method is linked to performance:

    1. Work sample tests (.54)

    2. GMA tests …”General mental ability” (.51)

    3. Employment interviews — structured (.51)

    4. Peer ratings (.49)

    5. Job knowledge tests (.48) Tests to assess how much employees know about specific aspects of the job.

    6. T & E behavioral consistency method (.45) “Based on the principle that past behavior is the best predictor of future behavior. In practice, the method involves describing previous accomplishments gained through work, training, or other experience (e.g., school, community service, hobbies) and matching those accomplishments to the competencies required by the job.” In other words, a method where past achievements thought to be important to behavior on the job are weighted and scored.

    7. Job tryout procedure (.44) Where employees go through a trial period of doing the entire job.

    8. Integrity tests (.41) Designed to assess honesty … I don’t like them but they do appear to work.

    9. Employment interviews — unstructured (.38)

    10. Assessment centers (.37)

    11. Biographical data measures (.35)

    12. Conscientiousness tests (.31) Essentially do people follow through on their promises, do what they say, and work doggedly and reliably to finish their work.

    13. Reference checks (.26)

    14. Job experience –years (.18)

    15. T & E point method (.11)

    16. Years of education (.10)

    17. Interests (.10)

    18. Graphology (.02) e.g., handwriting analysis.

    19. Age (-.01)

    Certainly, this rank-ordering does not apply in every setting. It is also important to recall that there is a lot of controversy about IQ, with many researchers now arguing that it is more malleable than previously thought. But I find it interesting to see what doesn’t work very well — years of education and age in particular. And note that unstructured interviews, although of some value, are not an especially powerful method, despite their widespread use. Interviews are strange in that people have excessive confidence in them, especially in their own abilities to pick winners and losers — when in fact the real explanation is that most of us have poor and extremely self-serving memories.”…
    ============================================================

    If I interpret these correctly (and they’re actually accurate) then assessment tests are #6 most effective of 19 methods, and behavioral interviewing is #10 of 19.

    The guy says (again if he’s correct based on what is here):

    “The upshot of this research is that work sample tests (e.g., seeing if people can actually do key elements of a job — if a secretary can type or a programmer can write code ), general mental ability (IQ and related tests), and structured interviews had the highest validity of all methods examined”

    Keith comments:

    1) Except for occasionally programmers- I’ve never seen work sample tests being done. Why not? Too hard to set up?

    2) How can you make an unbiased, nondiscriminatory GMA test? i.e., “What’re the names of some?”

    3) I’ve never been in a place where they administered an integrity test. Why not? Is it considered undignified for professionals?

    4) How is “structured interview” defined, and how does it differ from unstructured and behavioral interviews which don’t work as well? Is it just knowing what questions to ask ahead of time?

    Your thoughts….

    Keith “Just the facts, ma’am” Halperin

  27. Keith – I found your posting and questions VERY useful. I would be interested in the answer to #1 as well.

    Dr. Janz – I am not questioning whether your assessments are useful or carry an ROI. I believe assessments are necessary.

    My questions were derived from your idea to test all applicants up front, video interview the top 10, and then conduct a personalized structured interview for the top 2

    VS

    Creating multiple talent channels to find top performers, screening using CVs and a quick performance-based telephone interview, using a structured performance-based interview process for the short list, and assessing to confirm the results.

    According to the posting from Keith, the performance predictability would be very similar between the two approaches. But, I see a difference between screening out the bottom third versus actually attracting the top third to begin with. I tend to believe that top people are under-represented in the active job market, and those that are in the active job market search for jobs in a different way than average performers.

    I would assume that leading with testing followed by video interviews would cause a negative emotional response in applicants – much like having to fill out a job application online (unless they are desperate, the vast majority just don’t do it… according to observations from Monster.Com… hence the auto-population of information from CVs).

    Do Virtual Job Tryouts (VJT) involve 1) front-end assessment to screen, 2) video-taped interviews, followed by 3) standardized testing?

    I would like to know more about VJTs….

  28. Keith and Dennis
    Great posts that pose meaningful questions.
    First, an update. The comment made about S&H 1998 – “The upshot of this research is that work sample tests (e.g., seeing if people can actually do key elements of a job — if a secretary can type or a programmer can write code ), general mental ability (IQ and related tests), and structured interviews had the highest validity of all methods examined” – is accurate as printed in the article. A recent closer examination reported in Sackett and Lievens (2008), Annual Review of Psychology: Personnel Selection, finds that the Hunter and Hunter (1984) reported value for work sample tests was in error and should be closer to .34. So tests of General Mental Ability and structured interviews are the top remaining predictors of job performance across all reported research.

    To Keith’s questions:
    [1] Job sample tests (such as typing tests) are used for routine work (sewing machine operators) and are de rigueur in Silicon Valley selection at Google, Yahoo, NetApp, and others, where candidates solve problems on a whiteboard while the technical team observes. I developed one for a foam extruder operator in the automotive parts industry. That plant made headliners for cars. They are expensive to create and norm when done scientifically. Not so much when creating whiteboarding exercises in Silicon Valley.

    [2] There is no such thing, since there are mean differences on the criterion as well as the predictor across protected classes. One can reduce the bias using culture free items and focusing on simpler components of GMA, such as short term memory. One study by a McDaniel student found less than half the bias for a test of short term memory with almost the same job performance correlation. For reasons that baffle McDaniel, that finding has not seen wide deployment.

    [3] Most integrity tests are of the “admissions” variety and were created for hourly restaurant/hospitality/retail roles for the most part. Only those lower in GMA admit illegal acts, even if only slightly illegal acts, on an employment test. Thus admissions-based integrity tests don’t work all that well for high-GMA professions. Restriction of range. Personality-based measures of integrity, such as the EI originally offered by PDI (now in the hands of PreVisor), have shown the largest correlations with white-collar crime among incarcerated felons.

    [4] Patterned behavioral interviews ARE structured interviews. I am not sure where this attempt to differentiate behavioral interviews (such as those developed by DDI, PDI, Green, and Lominger, as well as PeopleAssessments.com) from structured interviews comes from. There are some articles on the “highly structured interview,” but when those interviews focus on situational or hypothetical behavior, they don’t predict job performance as well as past-performance interviews and don’t add much beyond GMA, where past-performance interviews do. (A small sketch of this incremental-validity arithmetic follows this comment.)

    Now for Dennis:

    First, the S&H 1998 article does NOT suggest that weak screening techniques such as the alternatives you suggest (with population validities well below .2) will deliver the same overall expected hire performance value as valid screening methods (validities in the .4-.5 range) that I assure you (200,000 cases processed by us; many more by others) people will complete, particularly these days.

    Second: don’t assume. Data from over 6,000 pharma sales rep candidates and 12,000 restaurant manager candidates confirm that candidates overwhelmingly were MORE positive about the online assessment methods than about the techniques you offered as the traditional alternative. I can email you the summary graphs if you wish.

    Sorry for the lack of pleasantries. It has been a hard day’s night of wrestling, but I am not complaining and I thank you for your thoughtful questions.
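
A small sketch of the incremental-validity point in answer [4] above: the combined validity of two predictors follows the standard two-predictor multiple-correlation formula. The .51 validities are the Schmidt & Hunter (1998) values quoted earlier in this thread; the predictor intercorrelation of .30 is an assumed value for illustration, not a figure from the discussion.

```python
# How much a structured (past-performance) interview adds beyond a GMA test,
# via the two-predictor multiple correlation R, where
# R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2).
from math import sqrt

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation of two predictors with validities r1, r2 and intercorrelation r12."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

gma_validity = 0.51                # S&H (1998) value quoted in the thread
interview_validity = 0.51          # structured interview, same source
assumed_intercorrelation = 0.30    # hypothetical overlap between the two predictors

combined = multiple_r(gma_validity, interview_validity, assumed_intercorrelation)
print(f"GMA alone: {gma_validity:.2f}  GMA + structured interview: {combined:.2f}")
print(f"Incremental validity of the interview: {combined - gma_validity:.2f}")
```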

  29. Dr. Janz,

    Thank you for the reply; yes the graphs would be of value: dennis.kline@arcor.de 🙂

    While I appreciate all the thought you have put into your answers, you are still arguing the validity of assessing, which I already agree with.

    My question is of a different nature and is focused on the real world problem of balancing the art of attracting top talent with that of screening them.

    My comments were sparked by your statement that:
    “test and BIO all the eligible candidates (decision power at least 40%), Video Interview the top 10, and interview the top 2 in person guided by a Personalized Interview Kit. Let’s see: 2-5% vs. 95-98%. Who’s gonna win? My money is on us.”

    While you criticized the process I tend to favor as weak and invalid, you are in error. Let me explain:

    The initial conversations with potential candidates are designed to proactively find top talent and then spark their interest in a career opportunity, while only screening for obvious mismatches in terms of personal goals and job match.

    Human prospects like talking to other humans in order to explore an opportunity BEFORE taking tests and filling out forms. Tests and forms cannot answer questions. Résumés and telephone screening give a potential candidate the chance to discuss life goals and work motivations. This helps the candidate understand the value of the career opportunity in question.

    Following this process with valid screening through standardized interviewing and assessment therefore makes sense to me. The validity of the standardized interviewing and assessment is well in excess of the 0.2 you gave it.

    I do not believe there would be a difference between your approach and my approach from this point on. However, I would argue that the initial process I outlined would result in a greater sample of top people from which to choose.

    If you argue that assessment should take place before standardized interviewing… fine, I see no problem with that.

    But, you give me the feeling that you believe that the purpose of collecting resumes and speaking with candidates on the phone is a waste of time. Nothing could be further from the truth if one understands why one is collecting resumes and speaking with candidates on the phone.

    Because I am questioning the real world application of what you proposed (assess, video, interview), your discussion of the 6,000 reps and 12,000 managers is a step in the right direction to help me understand your statement.

    Did these candidates all follow the steps you outlined as 98% effective (assessment, video, interview), or were they simply referred for online testing at different points in their selection process by your corporate clients?

    I have some questions about the data you are citing – the 12,000 managers and 6,000 reps:

    1) 12,000 and 6,000 out of how many?
    2) Did they all complete the online testing?
    3) Do you track the number of people who left the site without taking the test?
    4) Do you track the number of people who decided NOT to go to your site to take the test?

    My point is that if your sample only includes the job hunters who decided to take the test, it is not an accurate measure

  30. Sorry with the last post, I have an unfinished thought.

    … it is not an accurate measure of candidate satisfaction with being presented with a test to begin with because it does not include the sample who refused to take or finish it.

  31. I think everyone here has run into the age-old HR problem of missing the forest for the trees. That was a major point of the original article (I think). I’ve contended in recent articles (I’m positive) that companies inadvertently do things to eliminate the best candidates from consideration too soon. What Tom proposes does exactly that, even if what he proposes is an accurate measure of on-the-job performance. Which it probably is, but without science to back it up, it might not be the best measure.

  32. Tom has made an interesting point about using science to validate selection criteria. I totally support this view. Here’s an interesting scientific article that suggests there is less science behind traditional behavioral interviewing than most people believe.

    Understanding the determinants of employer use of selection methods
    By Wilk, Steffanie L., and Cappelli, Peter
    Publication: Personnel Psychology
    Date: Tuesday, April 1 2003

    http://www.allbusiness.com/labor-employment/human-resources-personnel-management/11432669-1.html

    Virtually no research has looked directly at characteristics of the work itself as a predictor of selection practices.

    Two recent studies, however, have focused primarily on the organizational-level selection decision. Terpstra and Rozell (1997) surveyed organizations on their use of different methods of selection (e.g., cognitive tests, structured interviews) and found that the reasons for not using a particular selection practice varied based on resource constraints, legal concerns, industry, and the knowledge of the human resources professionals in the firm. Based on these descriptive results, they argue that selection in organizations is not scientifically performed and call for additional research on selection practices at the organizational level.

  33. Dennis
    While I admire the confidence you place in your personal conclusions (i.e. ‘While you criticized the process I tend to favor as weak and invalid, you are in error’), we can all be in error when offering opinions, me included. That is where the findings can be handy.

    On to addressing your points. First, there are clearly some people who would prefer to start out with a conversation, but if you have observed the younger generation, you could not help but notice how often they prefer to text, email, IM, and surf, even when in the same room! I suggest that we begin engaging talent with a text/graphical or video tour of the role, including the opportunities and obstacles new hires face in that role. Then we ask quick eligibility and Ideal Work Environment questions, giving candidates feedback on how the role appears to be aligned (or NOT) with their needs/preferences. THEN we congratulate eligible candidates (those who meet the advertised minimum requirements) and invite them to begin a more in-depth online assessment. Sometimes we include a Personal Style profiler that provides them immediate feedback and coaching on how they can be more effective within their style type. So we should genuinely work hard at engaging them with meaningful, informative processes that avoid soaking up needless labor cost. My apologies for not pointing all this out earlier, but I have been a bit occupied lately.

    So I agree that providing candidates information that guides reasonable self selection decisions and engaging candidates in ways that help them are important objectives. I have seen no evidence to suggest a positive return for ‘schmoozing on the phone’ as the only or necessary way to do that. If you have some, please pass it along.

    Second, all the evidence I have seen on the accuracy of the unstructured phone screen as a predictor of future talent value puts it no higher than a population-corrected .19. Again, if you are aware of research that has escaped my attention, please pass it along.

    Finally, the results of our online candidate satisfaction questionnaires do not include those who abandoned the assessment. However, the assessment abandonment rates we found in trials where the candidates were invited (so we knew how many were asked to complete the assessment) have been as little as 9% and as much as 23%, and one hardly wants everyone to complete the assessment. Otherwise, what would be the point of a realistic job preview?

    So I believe we share the same objectives, but reach different conclusions about how best to achieve them. One thing is relatively certain given the known findings– online information/engagement followed by valid online assessment with a good behavioral interview at the end costs way less and delivers substantially higher expected job performance compared with a schmooze and interview strategy. That’s just the way it is, at least based on what is known vs. preferred.

    Lou
    Lou, as usual, I couldn’t agree more that it is a forest and trees problem and that “organizations do things to eliminate the best candidates from consideration too soon.” The findings clearly show that HOW they do that is by relying on subjective, unstructured resume sorts (all noise and no signal), personal sifting questions (applied to resumes or quick phone conversations) that have the effect of weeding out people who are not like me, and unvalidated personality trait questionnaires that have validities in the single digits and teens when applied to applicants vs. incumbents.

    I also violently agree that dry, boring, long, repetitive assessments will turn off top talent who always have other options. The challenge, which I believe is being tackled successfully by a number of ‘new age’ assessment providers, hopefully including us, is to get the measures that predict job performance using methods that engage (short, positive, fun, feedback-rich). Different forest. Different trees. Same goal. Make sense?

  34. Lou
    Thanks for the Wilk and Cappelli citation. I read it. I do things like that. It tackles the issues around how organizations have historically selected selection programs. Very interesting, but unrelated to the literature that examines which selection methods work best under what conditions (validation research).

    Here is what they concluded:
    “Likewise, there are implications for practice. Organizations should examine the various characteristics of work and from that consider the information most relevant and useful to that type of work in designing a selection process. For example, if the work has greater cognitive demands, then tests and academic achievements may be more useful. The more challenging the work or ambiguous the needed skill set, for example, like those associated with high performance work practices, the more extensive the use of certain types of selection.”

    Well, that does capture how many firms in general have acted when choosing their approach to candidate screening. However, there are dozens of top brands in multi-unit retail, grocery, restaurant, hospitality, and healthcare that deploy valid screening to millions of entry-level candidates each year. And field research finds very substantial financial benefits for doing so (see publications from PreVisor and Pearson TalentLens).

    I believe what Terpstra and Rozell (1997) were saying is that organizations largely do not use scientific criteria to pick which selection method to use, and that certainly squares with my experience. I only wish they would. We would both find life easier in that case. Cheers!

  35. Dr. Janz,

    I accept your half-hearted capitulation to the fact that personally engaging potential candidates before assessing them is necessary 🙂

  36. My eyes are starting to gum up over this! 🙁

    I think we have at least two things to consider here and I will try to say them as simply as I can:
    1) There is probably a bell curve determining how much time and effort someone is willing to put into applying for a given job. (I understand this may vary greatly depending on the individual, the job, and various other factors, but there should be some studies showing this.)

    2) There is probably a bell curve determining the relationship between the length of time using a given predictive method and its validity. (*Not how long the organization has been using it, but how long it takes an applicant to do it. I hope there is a study on this…)

    I SUSPECT that there may be an inherent tradeoff between the length of time someone is willing to put into using a predictive method and its predictive value; e.g. you could end up with a relatively good predictor (approaching the maximum scores we’ve seen discussed), but it takes longer than most people would be willing to put up with, or you could have a quick and dirty predictor which applicants really like that doesn’t work very well. I’d like to see what might be the “Goldilocks Zone” of reasonable validity.
    I hope at least that we could reach a general consensus on what NOT to do, no matter how well-heeled, articulate, and respected the proponents may be.

    Keith “Gummy Eyes” Halperin

    *This might be useful too: does using a given method more improve its validity/usefulness up to a certain point?

  37. This relates to a point raised somewhere on this list, regarding being able to assess the top-third with online screening. I don’t think it is possible. Tom, what criteria do you use to determine top-third?

    For my clients we actually prepare a list of detailed performance objectives required for on-the-job success and then determine what the top-third people do differently than the bottom-third. This has NOTHING to do with skills, experience, qualifications, years, industry, personality, behaviors, competencies, etc. – every person we consider is fully qualified to do the work.

    Separating the thirds has to do with managerial fit, drive to achieve the actual performance objectives in the performance profile, the trend of performance over time, fit with the culture, fit with the actual teams involved (we actually define this upfront and assess it during the interview), and the ability to handle the decision-making and problem-solving likely to be faced on the job. To make sure we did this correctly, we compare the pre-hire assessment of quality of hire to a post-hire assessment using the exact same assessment tool.

    Tom, this is the process I mentioned earlier that has been vetted and validated by OD and legal for all jobs from entry-level to exec.

  38. Holy Digital Dust Clouds Batman! What a maelstrom of opinions!

    Several years ago, while serving on the board of directors for the Employment Management Association (EMA), now SHRM SMA, I conducted a survey asking companies if they had conducted an in-house validation analysis – less than 10% stated they had. Contrast this to my second survey with SHRM on the use of objective evaluation methods (write for a copy of the SHRM White Paper), which found over 50% of companies using ability and/or work-style assessments. (Charles Handler’s annual surveys report similar results, with the trend moving upward.) The conclusion might be that companies, while claiming to be different and unique, will settle for the research, analysis, and scoring models established by other companies.

    Suffice it to say, test publishers may over-represent the value of the results achievable from deploying a test, based upon the results documented by other users. In addition, HR/recruiting professionals are often unwilling to engage in the business case study called validation analysis. The implication is that without in-house validation, a company deploys a measurement discipline for the business process called staffing based upon calibration conducted by some third party or, even worse, an amalgam of third parties, potentially including its competitors. The outcome is hiring people less suited to company-specific job demands and more suited to the job demands at the companies where the validation analysis was conducted. Using tests validated by others creates a “me too” workforce. That’s not an approach to enhance competitive advantage and create a differentiated workforce.

    Candidate clustering into groups such as the top 5% or the top 33% is interesting cannon fodder, but a distraction from the objective of making decisions based upon job-relevant data. A top percentage based upon what? Shoe size?

    A hiring decision will always be an act of personal judgment. However, every executive knows a decision is only as good as the data behind it. Each company hires its best and its worst from the same decision process. Conduct a Pareto analysis (80/20) on any performance metric for a population of employees in one job to see the impact of low-end variation in the selection process. Staffing process improvement can deliver more value by reducing low-end variation. That means taking steps to prevent tomorrow’s hiring decisions from letting in people similar to the poorest-performing candidates who were hired last year. In manufacturing terms, these hires would be labeled defects, causing waste and rework. The single largest challenge here is that less than 50% of companies have candidate evaluation data in a format that can be used for analysis. And of those companies who do have that data, less than 20% actually conduct the analysis to establish a link between hiring data and on-the-job performance. (Write for a copy of the Turnover Misnomer survey.)
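
    To make the 80/20 point concrete, here is a minimal sketch in Python (the data, distribution, and numbers are simulated purely for illustration – they are not drawn from any client or survey):

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated annual performance metric (e.g., revenue credited per
        # employee) for 500 people in one job. The lognormal spread is an
        # assumption chosen only to show skew; substitute your own data.
        performance = rng.lognormal(mean=11.5, sigma=0.6, size=500)

        # Pareto view: sort high to low and look at cumulative contribution.
        sorted_perf = np.sort(performance)[::-1]
        cum_share = np.cumsum(sorted_perf) / sorted_perf.sum()

        n = len(sorted_perf)
        top_20_share = cum_share[int(0.2 * n) - 1]
        bottom_20_share = 1 - cum_share[int(0.8 * n) - 1]

        print(f"Top 20% of employees contribute {top_20_share:.0%} of the metric")
        print(f"Bottom 20% of employees contribute {bottom_20_share:.0%} of the metric")

    Run against real performance data for a single job, a spread like that is exactly the low-end variation a better selection process should squeeze out.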

    At the end of the day, ego versus evidence is at the core of the hiring decision in most organizations. If you do not have an in-house analysis linking interview scores to job performance, or pre-employment test results to job performance, you have an ego-based approach to hiring decisions, with a process that is not scalable, not readily transferable, and with little objective evidence to document return on investment.

    Think about it.

    Joseph P. Murphy
    Shaker Consulting Group
    Developers of the Virtual Job Tryout®

    I agree with Joseph. I suspect that many hiring policies are designed to reflect and support the prejudices of those who originated them. Imagine if it were revealed at a large and prestigious organization that thousands of hires had been made on faulty premises, deliberately dysfunctional processes, and false validity predictors, and that the good performers were there despite (as opposed to because of) what was done to hire them?

    How many in positions of highest corporate authority would be willing to admit that they were wrong and that those who suspected/knew they were wrong were doing their jobs by “going along to get along”?

    Here’s an idea: let’s see how many *false premises we can name. Here are a couple:

    1) A high GPA leads to better on-the-job performance.
    2) People who graduate from select schools perform better on the job than those who don’t.

    Your turn….

    Keith “Show Me the Data” Halperin

    *If you can show me meaningful data to the contrary, I’m HAPPY to be proven wrong, and will admit it….Will the defenders of these premises do the same?

  40. I agree with Keith, Dennis, and Joseph. That’s why screening on skills and assessment tests inadvertently precludes the top-third of performers from ever being considered. They either opt out too soon on their own, or they get excluded for the wrong reasons. This was the original premise of the article – a new set of processes needs to be implemented that targets how the real top performers look for jobs, engage with a company, and select one offer over another.

  41. Keith “Show Me the Data” Halperin
    I put ‘GPA job performance’ into Google. Lots of articles show up. Here is an excerpt.
    “Grade Point Average. Although GPA has been widely analyzed, the research has produced inconsistent results. Some meta-analyses suggest that grades have relatively low validity as a predictor of job success (Bretz, 1989; Hunter and Hunter, 1984). Individual studies by Ferris (1982) and Schick and Kunnecke (1982) support the meta-analysis findings; these studies found no relationship between grades and performance evaluations.

    On the other hand, Dye and Reck’s meta-analysis (1988) revealed that limited variations of undergraduate GPA (e.g., GPA for the last two years, or for courses in the major field) can be more valid predictors of performance than overall GPA. In addition, Colarelli, Dean, and Konstans (1987); Wise (1975); Harrell and Harrell (1974); and Weisbrod and Karpoff (1968) all found positive relationships between grades and performance.”

    There now. Maybe it should be ‘Keith looking for the data Halperin’. Happy now?

  42. Lou
    Well, I agree with parts of what everyone has said and most of what Joe has said. But that alone has nothing to do with the validity of your most recent proclamation: “That’s why screening on skills and assessment tests inadvertently precludes the top-third of performers from ever being considered.”
    I would like to hear Joe’s viewpoint on that one.

    But I would be even more interested in HOW you (Lou) know that, HOW SURE you are of that conclusion, and how you arrived at that level of confidence. Really.

  43. Re: data on top-third apply rates. 1) Doug Berg from Jobs2Web has this data on apply rates and click-through rates nailed. Much of it is on their website. I’ve seen their master dashboard and their tracking of candidate behavior at every step in the process. We work closely with Doug on reviewing this data.
    2) If one believes that stronger candidates are not actively looking – especially those with 3+ years of experience – one could conclude that their view/apply rate is zero. Most companies don’t let their best people go unless the cutback is severe. One could argue that this is a false premise, but logic and common sense suggest otherwise.
    3) Surveys of our clients indicate that their strongest people come through referrals, bypassing the apply button completely. Of course, they could be wrong.

  44. Does anyone else but me notice that Lou’s reply does not answer the question asked and/or provides tangential anecdotal observations?

    BTW: Just for shirts and giggles, this post now has 45 comments. This feels like it might be approaching some kind of record. Perhaps the official ERE historian can fill us in on the maximum number of comments an article has received, and the mean/SD of comment counts. This wouldn’t likely happen if the issue did not hit key nerves and have important business impacts.

    Congrats to Lou for giving us all reasons to think and share and parry and thrust— all in good nature of course, even if a bit passionate at times.

  45. Keith,

    Thought-provoking post…
    From personal experience (and I know what an impressive sample that is), most of what I learned was post-college, which is probably why my paycheck increased substantially a few years AFTER graduation. Maybe those initial low paychecks were a reflection of the poor predictive power of my high GPA?

    In any case, I would tend to think that a POOR GPA might be a red flag for job performance in the field the person majored in… not scientifically validated or anything. But I’ve never heard of Fortune 500s running out to hire those 2.0 students, and I’m not sure I know something in that realm that they don’t, so I’m not running out either 🙂

    As far as the great performances of those from select universities… two names come to mind: President Bush and President Obama. But, their “outstanding” performances are, again, probably not a scientifically valid indication that your hypothesis is correct 😀

  46. Thanks, Dr. Janz. Here’s the link to the article he was referring to: http://www.entrepreneur.com/tradejournals/article/13412205.html
    It was from 1992. It certainly shows that there is ambiguity about whether GPA is a predictor. However, since there is clearly ambiguity (or was ~20 years ago – why isn’t there more recent research?), would it make sense to direct your recruiters to only hire people with 3.8 GPAs from “elite and selective” universities, regardless of what those people have done more recently or in other areas?

    Here’s a premise I HOPE is false (but really don’t know), and closer to what we’ve been talking about:
    “Interviews with large numbers of interviewers and/or substantial numbers of interview rounds produce “better” hires than interviews with fewer interviewers and/or rounds.”

    Another (untested as far as I know) premise:
    “Candidates who are telephone sourced and have little or no internet presence are “higher quality” than those who have more internet presence (and presumably are internet-sourced).”

    A final (for now) untested one that I hope is TRUE:
    “Candidates who are recruited are significantly better than those who apply.”
    (What about candidates who are referred? How do they compare to recruited or applied candidates? They do seem to contribute significantly to many companies’ hires….)

    Keith “Hope Someone Besides Dr. Janz and Me Will Do Some Research” Halperin

  47. Tom – an earlier question was asked which you ignored – how do you measure quality of hire?

    Re: the data – Doug Berg has it all; just call him and he’ll share it with you. Data doesn’t have to be published to be scientifically correct. If you looked at it, your question would be answered. Even Monster published this about six years ago. The conclusion: the top-third doesn’t follow the click-and-apply mentality you suspect – but can’t prove – they do. What data do you have that the best people – those who become the top-third of a company’s hires – came through the traditional click-and-apply process? To me this is at the root of the question. You have shown no evidence that this premise is false, but assume it is. On the other hand, when the supply of top candidates exceeds demand, we’re talking about a different sourcing scenario completely. In that case, the solutions you propose might actually be OK.

    (PS – the only reason I posted this was to break the ERE record. We could have resolved all of this in a 10 minute conference call.)

  48. And to push the record out there another notch. This relates to a string of comments a few posts back.

    Common sense, by its name, implies that two or more people share the same point of view or perspective – hence the “common” part. But I am not so sure about the common sense on this point, and I am quite curious to know how any assumptions can be made about people who did not apply, since you know nothing about them. On what grounds can we assert any description or evaluation of people who do not submit information for consideration? And maybe we should not care either! Let’s chase those dragons in the clouds.

    As for candidates who are allowed to bypass the apply button, this might suggest a process that is out of control and marked by inconsistencies, and as such non-conforming with the Uniform Guidelines on Employee Selection Procedures. This increases risk and possible assertions of unfair treatment. Having all candidates apply the same way would be the common sense of those interested in a fair, consistent, and reliable method.

    Meta-analysis serves well to document broadly that results can be predictable over multiple instances. It is, after all, the study of studies. So if we take the Schmidt-Hunter 85 Years paper, it shows that results can be achieved, and that they get better when two or more different measures are used, but it also shows that the available data are old school and marginal at best. However, it also states that it may be prudent to 1) have faith and trust in the value objective candidate evaluation may add, and 2) engage the same level of professional rigor and discipline within our own companies.

    We do not perform meta-analysis. We do not speak in broad generalities. We speak with confidence backed by data from the in-house validation analysis that accompanies each Virtual Job Tryout® implementation. Our clients learn from the evidence of their own experiences. And YES to Keith’s question: with closed-loop analytics, assessments can get smarter over time.

    Here is a case study summary that documents some of the candidate flow and measured outcomes that are typical when deploying an assessment which uses Web 2.0 design and conforms to the Uniform Guidelines on Employee Selection Procedures and the SIOP Guidelines on the Development and Validation of Assessments.

    Healthcare industry
    About 10,000 incumbents in the job family.
    Approximately 2500 hires are made per year.
    ~ 30,000 applicants per year.
    A job specific Virtual Job Tryout® (multi-method assessment that combines simulated work samples, job knowledge, work history and work style components) was created and rolled out after a concurrent validation.
    Completion time approximately 75 minutes
    Average abandonment rate is about 23%. (Candidates will stick with and complete a fun, cool, and engaging experience).
    90% of candidates state they will refer others based on the positive nature of application experience.
    Interview-to-hire ratio has been cut in half; thousands of interviews eliminated.
    90-day separation down by 50%.

    18 months later we performed a second validation analysis to recalibrate and learn from experience. A population of 500 hires from the control (old hiring method) and experiment (Virtual Job Tryout®) groups comprised the data set for a predictive validation analysis. Those associates hired with the old methods (ATS screening questions and interview), and those who scored in the bottom third on the Virtual Job Tryout®, were rated and performed in the bottom 20% on EVERY DIMENSION OF JOB PERFORMANCE. As an executive, what might you think about the staffing process when presented with this type of data?
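
    For readers who want to run this kind of check on their own hires, here is a bare-bones sketch in Python (the column names, numbers, and relationship are invented stand-ins – this is not Shaker’s data or scoring model):

        import numpy as np
        import pandas as pd

        # Hypothetical post-hire data set: one row per hire, with the
        # pre-hire assessment score and a later job-performance rating.
        rng = np.random.default_rng(1)
        n = 500
        score = rng.normal(50, 10, n)                # pre-hire assessment score
        rating = 0.05 * score + rng.normal(0, 1, n)  # later performance rating
        df = pd.DataFrame({"score": score, "rating": rating})

        # Simple predictive-validity check: correlation of pre-hire score
        # with later performance.
        validity = df["score"].corr(df["rating"])
        print(f"Pre-hire score vs. post-hire rating: r = {validity:.2f}")

        # Compare hires who scored in the bottom third with everyone else.
        cutoff = df["score"].quantile(1 / 3)
        bottom = df[df["score"] <= cutoff]
        rest = df[df["score"] > cutoff]
        print(f"Mean rating, bottom-third scorers: {bottom['rating'].mean():.2f}")
        print(f"Mean rating, everyone else:        {rest['rating'].mean():.2f}")

    A real analysis would use actual ratings and control for job, tenure, and manager, but even this minimal version shows whether pre-hire scores line up with post-hire performance.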

    Staffing professionals who use the rigor and have the discipline to produce the data have a story to tell, a story with economic impact.

    Joseph P Murphy
    Shaker Consulting Group
    Developers of the Virtual Job Tryout®

  49. Joe – I read the Schmidt/Hunter study – in fact I have a hard copy with me at all times (even right now in Toronto) – but meta-analysis by itself is not necessarily as valid as one would expect. For one thing, evaluating statistics of statistics doesn’t get at the level of the job, how the pre- and post-hiring performance is evaluated, or who didn’t apply. When one considers that the criteria a top person uses to decide to apply for and accept a job are fundamentally different from those an average person uses – read First, Break All the Rules, Topgrading, and Good to Great for more on this – it’s more than common sense; it’s more like gravity. Ignoring this abundance of real evidence – not a priori – seems inappropriate.

    Re: OFCCP, EEO, etc. – there is no requirement in any of these laws that the application process be identical for everyone. Otherwise employee referrals would not be considered valid, since those people have an inside track. I’m now working with the #1 OFCCP lawyer in the country, who suggests that the normal apply process is actually less objective and can be the cause of adverse impact, since it ignores people who can perform the work but who don’t have the exact skills described. Then add diversity into the mix. Most standard job descriptions have racial bias implicit in the criteria described. Companies are always willing to modify their criteria whenever they find a super candidate – frequently the best of them have comparable but not identical skills. Sometimes common sense actually is far superior to meta-analysis and statistics. Remember, correlation doesn’t imply causality.

    Justifying a worthy approach on its merits is fine. But ignoring the obvious and rational alternatives is comparable to the right-vs.-left debate we’re now observing in DC.

  50. “…. #1 OFCCP lawyer in the country, who suggests that the normal apply process is actually less objective and can be the cause of adverse impact since it ignores people who can perform the work, but who don’t have the exact skills described.” As I implied earlier, many/most companies’ processes do just this, as they are based on the biases and preferences of the founders/leaders, as opposed to demonstrably verifiable methods.

    Lou, who actually are “the best”? If you refer to quantifiable revenue-producers, that’s easy to define, and I’m a strong proponent of hiring the very best sales reps you can and treating them very, very well. What about those that either aren’t revenue producers or aren’t quantifiable? How do you rank people in these categories?
    While my temperament would like to see everything and everyone nicely classified, sorted, and ranked, for most kinds of jobs that would be very hard to do.

    Cheers,
    Keith “Running Low on Glib Remarks” Halperin

  51. A few years ago, I looked at this scenario with the same desire to ‘slice and dice’ and to ‘consult on the 97 different ways’ to hire the best. The more ‘strategies’ you come up with, the greater share of staffing spend you can secure, right? In a way, it pays to over-complicate matters. #justshootingstraight

    Anyway, I started with a premise – “It’s all about the compensation.” After diving and diving and diving (and diving some more), I returned to square one again:

    “If you want to hire great, then pay great. Period.”

    If you pay garbage, you recruit garbage. Too many companies want to recruit top talent at bottom barrel prices. Sure, maybe you catch an A-hire anomaly in the bunch (because they live 2 miles down the street), but those individuals are not the norm. The only exceptions I can find are non-profits making a difference in the world . . . and the U.S. Marines (they sell culture and inclusion, nothing more, nothing less. They sell the opportunity to “be one of us, the Few, the Proud.”)

    I met a ton of people in grad school who worked for bummish organizations . . . and they knew it. The bummish org was paying for their MBA (so it was buying the talent for a couple more years), but above all, the talent hung around because the bummish org paid the big bucks.

    Fast forward to today. As an Exec Recruiter, guess who’s the toughest person to recruit? Most ERE members know, but if you don’t know, I’ll tell you – the talent already making the big bucks. You want to hire and retain A-level talent? Then be willing to pay for it and leave behind any false realities of slave labor among a world of unicorns and minotaurs.

  52. This is in response to Keith’s comment.

    Below are the four performance objectives for a Sr. Product Manager. This is what we use when we initially take an assignment. We then rank all candidates on a 1-5 scale with respect to how well they’ll be able to deliver the objectives, to determine which third they’ll be in. This is how we ensure candidates will be in the top-third of the existing workforce.

    We also consider candidates even when they don’t have the required experiences listed on the job description. This opens up the pool to more highly qualified candidates – including more diverse candidates. By looking for gaps between the candidate’s career and this performance profile during the interview, we can then present the job as a career move (sorry, Joshua – I haven’t had your $$ problem, since I always present jobs as career moves this way). Comp does become a problem when the job is a lateral transfer – which is a fundamental problem of using job descriptions.

    Performance Profile for Product Marketing Manager

    1. Conduct a comprehensive review of all new product programs: During the first 30 days prepare an analysis of all new product programs. Include status of product development effort, budget vs. forecast comparison, status of launch efforts, and major challenges and problems. Submit recommendations and plans to prioritize scarce resources to meet critical target dates, including resource needs (people, financial, capital).

    2. Coordinate the development and launch of all the new product lines for the current season: Work with engineering, manufacturing, and marketing to determine status of all products to be launched this season. Identify hurdles and constraints and develop work-arounds as necessary to meet plan dates.

    3. Lead the development of the two-year product plan: Take the lead on preparing the two-year product program due within 120 days, coordinating with product development and engineering. This needs to consist of competitive analysis, assessment of technology trends, and market and consumer research. Include revenue and market share analysis by channel, including segmentation analysis. New products (introduced within 12 months) should represent 20% of total revenue within 18 months.

    4. Conduct a process review of the product development process: New products typically miss initial projections from a time and budget standpoint. Assess the product development process from all aspects and prepare a plan-of-action to re-organize all aspects of this effort. The goal is to have a predictable new product introduction program within 12 months.

    The results we’ve had with this process are exactly as reported in Gallup’s huge study of employee performance and satisfaction, described in First, Break All the Rules. Candidates hired this way perform at peak levels, have increased job satisfaction, and have lower turnover. (Tom – is their work scientific enough?) In this study it’s clear that the 12 core criteria that increase personal motivation are correct. The first is clarifying job expectations upfront. NONE of the 12 have to do with having the requisite skills and experiences.

  53. If a company pays peanuts, then Jesus, Allah, and Buddha (and perhaps the Great Alien Being himself/herself) can present the position themselves . . . and it won’t matter. Well, maybe a shot at the afterlife would be worth it to some.

    I’m just having fun – this is a good conversation, seriously 🙂 I like to inject a little humor when things get a little hot in the kitchen, but I’m learning from everyone here 🙂

  54. Josh – from a practical standpoint, we’ve seen that as long as a company is in the upper third comp-wise AND offers a clear career move, it can hire top people regularly. On the other hand, if it’s average or below, it can’t hire the best on an ongoing basis.

    Regardless, your point that compensation matters as part of raising the talent bar is absolutely valid.

  55. Thanks for clarifying, Lou.
    If I understand the following correctly:
    “We then rank all candidates on a 1-5 scale with respect to how well they’ll be able to deliver the objectives to determine what third they’ll be in.”,
    your people subjectively rate/rank the candidates as opposed to using objective criteria; i.e., you say, “based on our expertise and experience we’ve evaluated these people and believe them to be in the top third.” Since you and your people are very good at what you do, it works well most of the time. At the same time, quantified subjectivity ISN’T objectivity – another equally capable group might end up with somewhat different results.

    Cheers,
    Keith “Tries Not To Mix Apples With Oranges” Halperin

  56. Keith – you made an assumption that it’s subjective – it’s not. Instead, we offer specific guidance using our 10-factor candidate scorecard, which requires detailed evidence of past performance to justify every ranking. In addition, if there is more than a half-point variance among the interviewers, more assessment/testing/reference checking is required. You can download a sample of the 10-factor scorecard on our website. We believe a complete series of assessment testing should also be conducted after this performance-based interview to increase the predictive validity of the overall assessment.

    All of these assessments are based on our structured performance-based/behavioral interview process, which is fully described in my book, Hire With Your Head, and has been validated by OD and legal. (This was for Tom’s benefit.)

  57. In Surprising Support of Lou (for a change)
    There is no such thing as objective truth when it comes to measuring performance value. You can measure “collected revenue” arising from a sales effort, but as Jeff Bezos has said, “When we have a good quarter, it is not because of anything we did that quarter. It is because of hard and smart work we did 3, 4, or 5 years ago.” The numbers may be objective, but the conclusions we draw from them are anything but, unless we are satisfied with misleading conclusions.

    Lou’s ratings, based on carefully guided professional judgment, can be much more valuable for getting at true performance differences than a set of objective numbers contaminated by luck or bias.

    And Lou’s use of assessment testing to back it all up—that’s my kind of process where multiple perspectives of potential AND performance lead to a more complete picture.

  58. Thanks, Lou. Let me clarify: I don’t dispute its value – as I said, you and your team are very good at what you do. If you had a very large number of people evaluate candidates this way, you could say the results were statistically valid, but it’s just your team. I believe that you and your team are doing the evaluating based on your interpretation of what candidates say, and that’s not objective to me – it’s quantified expert opinion, which isn’t the same thing.

    Keith “Likes Things to Be Properly Named and Will Look on The Site To See What’s What” Halperin

  59. Keith – basically, this is not my team doing the assessment. This is what much of our hiring manager training is all about. We train managers to define on-the-job success, determine the differences between the top- and bottom-third, learn to use our structured performance/behavioral interview, and use our ten-factor scorecard to summarize their efforts.

    Its validity is increased because we urge teams to debrief collectively and not make a yes/no decision until the team is in close agreement on all ten factors. We also suggest, as part of the training, that interviewers grade themselves on each of the ten factors and then prove their rankings to someone else. This is a great learning exercise on the need for evidence, rather than intuition, to make an assessment. You can download a sample form from our website to try this out. You can also audit module one of recruiter boot camp online if you’d like to get a sense of this.

  60. I would be most pleased to participate, should Lou be able to tear himself away from one-question interviews, except that I have a previous commitment in Sarasota that week.

    It would be great to find other recruiters who, like Lou, actually want to earn a place at the C-Table by being able to “do the math”, instead of just whining about it in twits (Oh–did I spell that wrong?).

  61. Tom – we’re doing one on March 11th – you’re invited to participate. Also, I’m collecting all of the science on this topic and it seems to reinforce all of my comments, and a few of yours (note productivity isn’t the same as impact). Here’s the link to sign-up – http://budurl.com/agevents2

    Email me and I’ll make you a panelist (however, I have mike control)

  62. Will do, Lou. I will also email you the key scientific articles I am collecting for my trip to Singapore, where the client wants the low-down on all latest “best practice” research.

    Your friend,
    Tom (who needs enemies when they have Tom for a friend) Janz,
    PS Did you take a peek at the Strengths Inventory?

  63. Dr. Janz, I don’t believe Lou is actually a recruiter. He’s a trainer. I’m sure a great one, no doubt . . . but a trainer (and not an in-the-trenches doer), nonetheless.

    To your comment about “do[ing] the math”, wasn’t it you that gave me the business (no pun intended 🙂 ) with the following comment regarding QOH:

    “Regarding Joshua’s cost of capital comment, one could gain some brownie points with the financial crowd by citing Net Present Value, but it won’t make much difference . . . ” (http://bit.ly/9sWvrL)

    I’m glad to see you have come around to the importance of financial justification. As for me, I still feel the same today as I did then – if you can’t speak CFO-style, you can’t sell internally.

    However (at least to me), “do[ing] the math” means going beyond ROI. Andrew Rudin agrees about the futility of ROI – http://bit.ly/aEGTDc

  64. Sorry folks, but I’ve lost the thread of what we’re talking about here. I would suggest that when we make a claim, *we should indicate if it is:
    1) An untested, unmeasured, subjective opinion (IMHO: it can still be valid)
    2) A tested, measured subjective opinion (as I believe Lou’s claims here are)
    3) A tested, measured, objective/scientific study (put a hyperlink to the paper if you claim this).

    IMHO, there’s too much opinion being paraded about as fact here on ERE.

    -Keith

    *Please hold me to this standard, too. -kh

  65. Joshua – Well, Lou ran a search firm that made 1,500 placements, so there has got to be some recruiter in there. And I have probably been the longest-term supporter of financial analysis, going back to a book chapter I wrote with Marv Dunnette in the Hackman, Porter, Lawler (1977) book, titled “An approach to selection decisions: Dollars and Sense.” I have a white paper delivered to the Dallas Area IO Group titled “Beyond ROI: Simulating the shape of talent to come.” You can retrieve it at this link: http://mmorris.www5.50megs.com/daiop/archive.htm

    Keith – Wonderful suggestion. BTW, you had asked previously whether there was evidence of improved accuracy with multiple interviewers. There is. See the Weisner and Cronshaw meta-analysis, 1988, Personnel Psychology.

    Does this conversation have a half life?

  66. Thank you, Dr. Janz. I’m having difficulty getting the actual paper, but from the citations it seems that these folks say that structured behavioral interviews (PBDI) produce better results than unstructured ones.

    This wasn’t exactly my question – I meant to ask: all else being equal, what range for the number of interviewers and interviews is optimal for getting both good and practical results?

    In the meantime, I found an interesting paper:
    “The Contrast Effect in a Competency Based Situational Interview”
    http://docs.google.com/viewer?a=v&q=cache:5G2mw2XgK54J:https://dspace.lib.cranfield.ac.uk/bitstream/1826/3958/1/Contrast_effect-SWP2-04.pdf+%22Weisner+and+Cronshaw%22+personnel+psych+meta-analytic&hl=en&gl=us&pid=bl&srcid=ADGEESgMV1M0js78u-t4Q6O_ynMFv62XuXHlwtTeRK9T7jLUu9pSrZ0NBHlO9nZl238Xsa5hBGymFFTez7J3RNgwBu0L2_LMkrpQlegaN-VmODiKM6aVTwLO_ePmD4_dFYKsqPFFV11s&sig=AHIEtbQthrI_ECUQU_vrCIiDdZaWW7ZgEg

    It basically says that there’s a bias which causes interviewers to rate any given candidate higher if the preceding candidates were poor and lower if the preceding candidates were good, which can produce some incorrect decisions. However, the authors say that you can minimize this through various steps.

    Cheers,

    Keith

  67. Keith – based on my experience with over 10,000 interviews, it seems that 3-4 interviewers is optimal. The idea is that by using a structured process, with interviewers sharing their information via our 10-factor scorecard (this is the only tool I use), we have seen on-the-job performance soar. Based on 1,500 placements over 20 years, we have had less than 15% underperformance in year one and less than a 3% annual replacement rate. In most of those cases we actually predicted the underperformance, so I think the tool is still valid; the person was hired due to other pressing needs.

    Also, note that predicted on-the-job performance correlates directly with the variance among the raters for each of the 10 predictors. As a result, we suggest that when the interviewing team disagrees by more than plus or minus 0.5 points on the 1-5 scale, no yes/no decision be made. Additional assessment data is then required to justify a yes/no decision.
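
    To make the mechanics of that rule concrete, here is a rough sketch in Python (the factor names and scores are invented, and this is only a paraphrase of the half-point rule, not the actual 10-factor scorecard):

        # Each interviewer rates ten factors on a 1-5 scale. The factor names
        # and example scores below are hypothetical.
        FACTORS = [f"factor_{i}" for i in range(1, 11)]
        ratings = {
            "interviewer_A": [4.0, 3.5, 4.0, 3.0, 4.5, 4.0, 3.5, 4.0, 3.0, 4.0],
            "interviewer_B": [4.5, 3.0, 4.0, 3.5, 3.5, 4.0, 4.0, 3.5, 3.5, 4.0],
            "interviewer_C": [4.0, 3.5, 3.5, 3.0, 4.0, 4.5, 3.5, 4.0, 3.0, 4.5],
        }

        MAX_SPREAD = 0.5  # more than half a point of disagreement triggers more assessment

        for i, factor in enumerate(FACTORS):
            scores = [panel[i] for panel in ratings.values()]
            spread = max(scores) - min(scores)
            if spread > MAX_SPREAD:
                print(f"{factor}: spread {spread:.1f} -> gather more evidence "
                      "before any yes/no decision")
            else:
                print(f"{factor}: spread {spread:.1f} within tolerance")

    The point is simply to force the debrief to surface where interviewers saw different things, rather than averaging the disagreement away.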

    When there are more than 3-4 interviewers, the process goes out of control and nobody ever agrees on anything unless there’s a strong facilitator in the room. Getting the hiring team to even use the rating form is pretty difficult too, but the results are clear: when the form is used as described, predictive accuracy increases dramatically.

  68. Thanks again, Lou. I tend to agree with you re: the optimal number – it’s what I recommend to clients. 10,000 is a lot of interviews – still, is there any formal research that shows this? Also, is there any research on how many interviews per candidate are sufficient to reach a good decision? (IMHO, and what I say to clients: typically 1-2.)

    Cheers,

    Keith “Keep it Short and Simple” Halperin
