The Interview Debrief Trap

If you want a job at Google or McKinsey, you’ll have to go through a rigorous process. This process often includes as many as 10 interviews and requires you to provide six or more references.

The intention is good, for we all know that the more feedback one can gain on a candidate, the better. And this truth was discovered a long time ago. In 1906, for instance, Englishman Francis Galton, a cousin of Charles Darwin, stumbled upon an intriguing contest while attending a livestock fair. An ox was on display, and the visitors at the fair were invited to guess the animal’s weight after it was slaughtered and dressed. Nearly 800 participated, but not one person hit the exact mark: 1,198 pounds. Galton’s insight was to examine the mean of these guesses from independent people in the crowd. Astonishingly, the mean of those 800 guesses was 1,197 pounds, accurate to within a fraction of one percent.

Today, this phenomenon of a group being collectively more accurate than any single member is called collective intelligence, and the field is booming, as research has shown over and over again that estimates coming from many people, in the right circumstances, lead to results closer to the truth. This is because the extreme guesses essentially cancel each other out. That is why it is often referred to as a statistical phenomenon.
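The cancellation of independent errors is easy to see in a quick simulation. The sketch below is a toy illustration, not Galton’s actual data: it draws 800 independent guesses centered on the true weight of 1,198 pounds (the error size of 75 pounds is an assumption) and compares the crowd’s mean against a typical individual’s error.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198   # pounds: the ox's actual slaughtered-and-dressed weight
N_GUESSERS = 800     # roughly the size of Galton's crowd

# Assume each guess is independently noisy: unbiased on average,
# but individually off by about 75 pounds in either direction.
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(N_GUESSERS)]

crowd_mean = sum(guesses) / len(guesses)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd mean error:       {abs(crowd_mean - TRUE_WEIGHT):.1f} lbs")
print(f"typical personal error: {typical_error:.1f} lbs")
```

Because independent errors cancel, the crowd’s error shrinks roughly with the square root of the group size while each individual’s error stays put; the effect disappears as soon as the guesses stop being independent.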

Because of collective intelligence, organizations that try to stay on the cutting edge of information and technology think they should interview more individuals, because more is better.

This isn’t always true.

An article called “How social influence can undermine the wisdom of the crowd effect” discusses how we, as social beings, are subject to influences that often make us revise our estimates, affecting the accuracy of the outcome. In their research, the authors show how the “social influence effect” — such as learning that an interviewer, or your boss, favors a candidate — diminishes the diversity of the group without improving its accuracy. The experiments conducted by the authors tested objective, verifiable data and controlled for social conformity biases.

The group subject to social influence actually became less reliable in guiding the decision-makers. As the group is nudged in one direction and the range of opinions narrows, the most perverse result is the “confidence effect,” or overconfidence: everyone reaches an agreement, and certainty rises, even though the decision may be wrong.

In practice, this means a hiring manager who relies on 10 very keen interviewers can be supremely confident in the decision and exhibit a high level of certainty while the accuracy, in reality, hasn’t improved a bit.
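A toy model makes the dynamic concrete (my own illustration with assumed numbers, not the experiment from the paper): ten interviewers each form an independent, noisy estimate of a candidate, then voice their scores one at a time in a debrief, each anchoring on the consensus voiced so far.

```python
import random
import statistics

random.seed(7)

TRUE_SCORE = 70   # the candidate's "real" quality on a 0-100 scale (assumed)
N_RATERS = 10
NOISE = 12        # each interviewer's independent assessment error (assumed)
PULL = 0.7        # how strongly later speakers anchor on earlier ones

# Private, independent impressions formed during the interviews.
private = [random.gauss(TRUE_SCORE, NOISE) for _ in range(N_RATERS)]

# Debrief: each interviewer speaks in turn and shades their score
# toward the running average of what has already been said.
voiced = [private[0]]
for signal in private[1:]:
    consensus_so_far = sum(voiced) / len(voiced)
    voiced.append(PULL * consensus_so_far + (1 - PULL) * signal)

for label, scores in (("independent", private), ("after debrief", voiced)):
    mean_error = abs(sum(scores) / len(scores) - TRUE_SCORE)
    print(f"{label:>14}: spread={statistics.stdev(scores):5.1f}  "
          f"mean error={mean_error:4.1f}")
```

The spread of opinions collapses after the debrief, which is exactly the confidence effect: the group feels certain. But the group mean is now pulled toward whatever the first speakers happened to say, so the accuracy of the average does not improve.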


Ultimately, this fascinating research shows that even though we tend to believe that debating a decision will lead to a better choice, this assumption fails to take into account the fact that no one is immune to social influence.

In the interview debriefs that organizations tend to conduct by phone or in person, the most vocal person usually sways the outcome and the decisions of all the others. Those involved will eventually converge toward a common assessment, and the confidence of the group will rise. The group will then make a decision feeling very certain they have made a good choice, but a single answer is not the stamp of accuracy.

To prevent this, ask for feedback on the rating of the candidates before any discussion starts. This can be done online before the meeting or even at the meeting before it starts. If you are leading the hiring committee and are leaning toward a particular judgment, write it down rather than consciously or unconsciously leading the group in the direction you wish to take.

The next time you debate the hiring or promotion of an individual, think about Francis Galton and the ox, and ask yourself whether the decision has truly been driven by diverse feedback or by the views of one strong-minded, vocal individual.

Yves Lermusi (aka Lermusiaux) is CEO & co-founder of Checkster. Mr. Lermusi is a well-known public speaker and a Career and Talent industry commentator. He is often quoted in the leading business media worldwide, including Fortune, The Wall Street Journal, Financial Times, Business Week, and Time Magazine. His articles and commentary are published regularly in online publications and business magazines. Mr. Lermusi was named one of the “100 Most Influential People in the Recruiting Industry” and his blog has been recognized as the best third-party blog.



46 Comments on “The Interview Debrief Trap”

  1. The Francis Galton ox story outcome rests on the crowdsourcing of a singular construct, weight, that most livestock fair attendees could reasonably be expected to guess with varying degrees of accuracy and likely neutral bias (i.e. as likely to guess high as low). With a large sample (800) of such observations, the central limit theorem should prevail.

    Employee selection procedures are vastly different from livestock fair attendees guessing the slaughtered weight of an ox.

    Most of those making hiring decisions do not first conduct/review an up-to-date job analysis to determine what it takes to perform well in the job; most cannot even identify the various constructs that they are attempting to measure in their interviews, let alone establish the job-relatedness and validity of those measurements.

    Then the process advances to the “Group Grope” phase, where participants reach decisions based on collections of largely unqualified opinions, typically under the influence of dysfunctional group dynamics.

    Until employers do as required by the Uniform Guidelines on Employee Selection Procedures – i.e. base their selection decisions on valid, job-related measurements and uniformly applied processes – they will make more than their fair share of hiring mistakes.

    Resume reads and application reviews rank right up there with unstructured interviews and group gropes, in adding unwelcome (detrimental) bias to the employee selection process.

    Best selection practices require valid assessment of constructs in concert with a job analysis. Compliant practices require the same thing.

  2. Thank you, Yves. I challenge those companies who put candidates through large numbers of interviews with hordes of interviewers to provide objective, peer-reviewed studies that show these typically provide better, more cost-effective hiring results than 1-2 rounds with 4-6 total interviewers. Furthermore, often more knowledge can produce worse results.



  3. I agree with you, Richard (JA, validity, and all the rest); however, I am reminded of the literature reporting increased validity from multiple interviewers and interview panels (i.e., even though they did not follow professional practices).

    I wonder if Yves’ “regression to the mean” in unstructured interviews might be a product of fewer false positives, as opposed to more true positives. What do you think, Richard?

  4. Wendell,

    I am wondering if a test could be constructed that would provide a probability score for the likelihood of success of a particular candidate.

    The test would be based on the analysis and test scores of existing high-performing employees. I am not necessarily advocating for this; I am just wondering why we could not apply prediction tools to employment. These types of instruments are currently being applied to weather forecasting, drug development, medical treatments, the likelihood of recidivism for a parolee, and many other complex, multivariate situations.

    It would seem possible to more or less relegate the interview to a social event, rather than to a real tool for analysis and selection and thereby end these discussions we seem to have all the time.

  5. Good question… the kind of test you refer to is usually based on biographical data (i.e., biodata). High and low performers are surveyed for differences, scored, and the results are converted into a questionnaire (i.e., that is given to candidates) predicting the probability of success.

    It’s important to use BOTH high and low performance factors, otherwise, there is no way to know if the factors make a difference or not.

    Therefore, accuracy in employment testing comes from two sources: 1) knowing what factors make the difference between high and low performance; and 2) using trustworthy tests that screen out people who cannot meet minimum standards for each factor. In effect, it’s a “show-me” game, where the objective is reducing the probability of being wrong.

    You might note the word “wrong”…we can never be perfectly right…there are just too many unpredictable events…We can, however, reduce the probability of being wrong by only hiring people who “pass” our screens. If it’s done right, it has about 90% success (i.e., because we screened out more low-scorers).

    The only place I have found casual interviews useful is in breaking the ice.

  6. I am on deadline but can’t resist a thread with my favorite players and subject. I wish we could collect all of the old rounds into some kind of meta-discussion (so we would not have to hit all the chestnuts each time) but this one is a good setup:

    @ Richard, your erudition always impresses. Less so your faith in the Uniform Guidelines as a means to hiring utopia, but agreed that the Guidelines are worlds better than what most organizations practice.

    @Keith, thanks for the link- most interesting. Reminds me of the people who pore over the racing form but still lose as much as first-timers when the bets are paid out, but also underlies the point I make each time with WW.

    The point is that employment success of highly qualified candidates in creative and leadership (i.e highest-impact) roles is an emergent process that can only be dealt with as a set of probabilities in model systems. The person you hire will change and be changed by your organization, and those are the changes that count.

    This is the same point that works against a thesis that social influence necessarily undermines or opposes crowd-sourced accuracy of assessment- in an emergent system, there is no telling which factor(s) comprising an assessment are driving final outcomes (again, only bounded probabilities)- as the same social influence that pushes an inferior hire to the fore may just continue to push that person to success, a self-fulfilling result.

    I think wisdom can be found in the annals of love, war, and sport- where the human mind is at its limits. I see so many great generals of history making unpredictable but (later obviously brilliant) personnel moves, and I see that small group bonds, (especially between two people but still tenacious with twenty) are the most powerful motivators and change agents that most people usually experience in their lives, and that disruption of social mobility and small group social contracts can have extraordinarily destructive consequences.

    I see the great coaches as the ones who create identity for a group and have the knack of the right decision at the right moment. There is much more to a great player than simply playing great, and there are great players whose games are really only average, except for the big moments. At the high levels of recruiting and hiring, all the candidates are good. KSA’s are a given, but the big differences are in group fit, flow, temperament, and timing. The pre-hire assessment business will get to the next level when those elements can begin to be described and modeled (social networks may provide the mountains of data needed to start modeling them)….brave new world on the way……

  7. We get your point, already, Martin. After-the-fact conditions affect performance. That’s old news. I’m not sure why you keep pressing this point unless you had some kind of bad experience.

    Since you seem to discount the Guidelines, can you tell me which part you disagree with? The part where you thoroughly understand the job? Use valid and reliable tools to measure candidates? Track adverse impact? Show documentation for business necessity and job requirements?

    No one disputes that post-hire conditions are often out of control…The best a professional recruiter can do, however, is make sure candidates come to the job well-skilled and fairly treated. That’s what the Guidelines are all about.

  8. The “weighty” question in hiring is relevant when certain individuals use their position (or weight) to unfairly influence the interview team. In essence, one vote is all that matters and the rest are going through the motions. That is what makes outside reference checking more valuable to teams that are under duress from people in positions of power.

  9. Man you guys are good! The question referring to the ox and the information gathered would still end with the ox being slaughtered. I agree wholeheartedly with Richard in his offering of the Group Grope theory. As well as the somewhat informative Dr. Ives, PhD, MBA :) albeit great stuff.
    But Doc, really! “‘Regression to the mean’ in unstructured interviews might be a product of fewer false positives, as opposed to more true positives.” Yow!
    Don’t pick on Martin too much, Doc; after all, experience is what has led us to gaze into the Crystal Ball of Human Capital and come away with the “deer in the headlights” look, from exit interviews and corporate “logic.”
    As for me, testing and the conclusions therein are, as they always will be, in the “eyes of the beholder.” Keep this up guys; the subject matter experts help me every day in my conversations with the real or unreal world. And Doc, I count on you to tell it like it is.

  10. Always fascinating to ponder the what-ifs of assessment…

    What Kevin and Wendell postulate has been kicked around more than a few times here since the ’90s…I cannot agree more with the concept of “high and low performers surveyed for differences, scored, and the results converted into a questionnaire (i.e., that is given to candidates) predicting the probability of success.” A few practitioners out there like People Assessments and Shaker make use of some of these techniques…

    The “but” here is that it is cumbersome, difficult to gain buy-in for, and costly. This is why only a teeny number of companies make use of any form of what you describe, and business success suffers.

    Whoever creates a system that streamlines and simplifies this concept will not only make a fortune, but will completely change the way companies are built… Social gatherings (interviews), marketing documents (resumes or Professional Brand Profiles) and employment advertising will change entirely, along with the whole employment system as it exists today… the outcome could be as big as the Engagement Economy, or the Industrial Revolution, because once all companies hire in this way the percentage of poor performance will melt away – what was that number of accuracy – 90%! That is THE game changer we need…

    Can one of you brilliant guys get busy on that please?

  11. Also, don’t overlook the changing variables. Was the ox largely seen and guessed by the same people in the same general daylight and timeframe? Amazing how several interviews with several people tend to reflect as much their situation as the interviewees.

    In my many years of working with hiring managers and teams I found these things to generally happen:

    1) Hiring decisions were made more quickly when the need to fill was high and the candidates interviewed were of sufficient number and quality. Bottom line – people made hiring decisions when they were ready.

    2) Hiring decisions, even in group hiring, are influenced disproportionately by someone in the process. I can’t count the number of times I watched hires go through that got consensus or majority because someone influential in the process “lobbied” a candidate through to hire.

    3) Group interviewing and hiring often allows too many personal biases into the process. I’m not saying personal bias is avoidable, but some are bad. Ever see a candidate not get hired who was “overqualified?”

    I’ve always thought group hiring works best when it’s done for candidate finalists and to reinforce the inclination of the hiring manager or smaller hiring team.

    Final thought – large group interview teams rarely, if ever, are accountable for poor hiring decisions. Or hiring results at all. Few metrics are pushed to the point of collecting enough info to measure a hiring group’s impact.

    Galton lived in an agrarian economy. Undoubtedly, many of the people were reasonably familiar with the common weights and attributes of oxen. I wouldn’t be at all surprised if, were you to ask anyone from any major US city that same question today, the guess would be not just far off, but very far off.

    Interviewers and interview teams today are only accurate if they’re truly connected to the business needs and environment. So, could a team of 10 be more accurate than a team of 4-6? Probably, but how much more accuracy is necessary when the average tenure in a job has fallen to just over 3 years?

  12. OK…let’s cut to the chase…we want our tests (interviews, etc.) to do two things: pass the highest number of skilled folks (true positives) and pass the lowest number of unskilled folks (false positives)…traditional interviews have about a 50/50 rate because they usually measure the wrong things and rely on (often) unverifiable self-reported data.

    Now, let’s add in the environmental effects Martin speaks about. Suppose we hire 10 people using traditional interviews and another 10 people using the process outlined in the Guidelines. Then we send all 20 to do the same job for the same manager. Which group do you think will do better…the group that showed us beyond a doubt they had the right skills…or, the group that told us they had the right skills?

    That is the issue.

    Organizations DON’T have to do that for every title; they can organize titles into “families,” do a JA and test validation for each family, then keep it updated. The payoff is huge.
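    The arithmetic behind the 50/50 rate and the “about 90% success” figure mentioned earlier in the thread can be sketched quickly (the pass rates below are illustrative assumptions, not data from any study):

```python
def hire_quality(n_applicants, skilled_rate, pass_skilled, pass_unskilled):
    """Fraction of hires who are actually skilled, for a screen that passes
    skilled applicants at pass_skilled and unskilled at pass_unskilled."""
    skilled = n_applicants * skilled_rate
    unskilled = n_applicants - skilled
    true_positives = skilled * pass_skilled        # skilled people who pass
    false_positives = unskilled * pass_unskilled   # unskilled people who pass
    return true_positives / (true_positives + false_positives)

# Traditional interview: roughly a coin flip either way (the "50/50" rate).
print(f"unstructured interview: {hire_quality(100, 0.5, 0.5, 0.5):.0%} skilled hires")

# A validated, job-analysis-based screen: assume it passes 90% of the
# skilled and only 10% of the unskilled.
print(f"validated screen:       {hire_quality(100, 0.5, 0.9, 0.1):.0%} skilled hires")
```

    Under these assumed rates, the unstructured interview yields hires who are skilled only half the time, while the validated screen yields about 90%, which is where that figure comes from.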

  13. Yves,

    Very nice article. I hope it will get people thinking more about the true effectiveness of their interview processes. The process I’ve put together takes care of this issue, and I’m glad to see there are others looking at it.
    @Richard: you are right on with your observation about companies lacking any quality analysis into what they’re looking for.

  14. Kevin, some employers do assess their qualified-to-be-hired job applicants, and they do get an Overall Job Match percentage from 25% to 95%. Users learn that hiring qualified applicants above 85% allows them to hire more top performers. Users also learn that hiring qualified applicants below 70% causes them to hire too many underperformers.

  15. Be careful of the “job match” idea…often this means comparing traits of the job to traits of the person. In real life, some traits have more impact on the job than others. Furthermore, the best predictor of job performance (in most jobs) is intelligence (and this is not a trait).

  16. Dr. Williams – dumb question time…so is there a test for intelligence that doesn’t have to be linked to a job type? (IQ??)

    Yves – I meant to say this earlier…I really liked the point you make about “collective intelligence”; it is something that I have been interested in for some time…if you were to flip the concept – do you think that SNA or a type of personal network measurement could provide a window on the level of a person’s capability? I’ve thought a lot about this lately and would be interested in your thoughts!

    Thanks for a very thought provoking article…

  17. I’m always amused at Dr. Williams’ comments about what we do.

    “… often this means comparing traits of the job to traits of the person.”

    Dr. Williams is incorrect since we do not compare “traits of the job to traits of the person.”

    “In real life, some traits have more impact on the job than others.”

    Yes, our clients know that and use that knowledge daily.

    “The best predictor of job performance (in most jobs) is intelligence (and this is not a trait).”

    I’m afraid I’ll have to disagree with Dr. Williams on this one. Mother Nature or God would not be so cruel as to assign all but the most intelligent to the bottom rung of the job-performance ladder. And there is more to job success than intelligence alone; ask the managers of the best and brightest graduates of the best schools if they are always successful just because they are very intelligent.

    Does Dr. Williams really believe he is the only Ph.D. who can design an assessment that works? We have more than 50,000 employers worldwide in all businesses and in government agencies with their own Ph.D.s; I suppose we can presume that all those Ph.D.s are stupid or incompetent, but I prefer to put the onus on Dr. Williams.

  18. Ouch!

    Apparently, Mr. Gately disagrees with my comments. You really don’t know the profession, Sir. Check the literature…Every one of us stupid and incompetent I/O PhDs has spent years in graduate school studying this field…and decades practicing it… How about you?

    I am just one voice among many…There are thousands of us who know and recommend best practices. Why don’t you ask any of the PhDs at the vendor you represent about intelligence and performance correlations (I believe it is PI?)…Then come back to this forum and apologize for promoting wrong-headed misinformation.

  19. To KC…the more learning and problem solving required, the more intelligence is required. Execs usually need a considerable amount of abstract thinking…laborers, not so much…The challenge is setting the intelligence bar high enough to get good people (for XYZ job), but not so high that it screens out too many people.

    It is a dilemma…job success requires the right level of intelligence, but intelligence does not guarantee job success.

  20. Dr. Williams, yes I disagree with your comments. I do not need to be a Ph.D. to recognize an intellectual bully when I read one. I have had discussions with many Ph.D. experts since 1992 and one in particular comes readily to mind. Dr. Kevin contacted me to evaluate our assessments and I was pleased but surprised. After reading the Technical Manual Dr. Kevin started using the assessment. Since I never pass up a chance to speak with a Ph.D. about our assessments I asked him, “You have a Ph.D. so you must know about and have access to many assessments, so why would you choose ours?” He laughed and said, “Yes, Notre Dame University prepared me quite well to use a multitude of assessments, but yours is well-suited and valid for use in an employment situation. I have to be very careful to use assessments that do not put my license in jeopardy.” Many Ph.D.s have told me the same thing.

    I asked another Ph.D., “Why do you use our assessment since you could administer a battery of assessments and then prepare a report that would cost thousands of dollars?” Dr. Ben laughed and replied, “Yes, I used to use a battery of six assessments, then write the report, and then deliver the report along with an invoice for $3,000.” I then asked, “But why do you use our assessment?” He replied, “Because I can deliver a more useful report within several hours than I could after several days and it costs 10% as much. My clients are happier and more successful; it is a no-brainer.” He told me that I should not hesitate to offer the assessment to the business community since it is well designed and users do not need a Ph.D. Yes, that is true, users do not need to have a Ph.D. nor do they need to pay a Ph.D. to use the assessment process. I’m sure that Dr. Williams is prepared to share with us the lawsuits that we have lost since 1991, but wait, there are none.

    I am quite familiar with intelligence and job performance. What I do know is that our clients are always surprised when they learn that their best employees are not necessarily their brightest employees. They are surprised as well to learn that many of their problem employees are also their brightest employees. There is much more to job success than intelligence. That said, if I could only use one selection criterion other than competence, I would use intelligence, but I would know that I would be hiring many bad employees and not hiring many good employees. Also, SCOTUS may take a close look at such a dumb practice, see below. Fortunately for employers, they do not have to rely on only one selection criterion.

    The following is from

    Griggs v Duke Power

    Facts of the Case:

    Willie Griggs filed a class action, on behalf of several fellow African-American employees, against his employer Duke Power Company. Griggs challenged Duke’s “inside” transfer policy, requiring employees who want to work in all but the company’s lowest paying Labor Department to register a minimum score on two separate aptitude tests in addition to having a high school education. Griggs claimed that Duke’s policy discriminated against African-American employees in violation of Title VII of the 1964 Civil Rights Act. On appeal from a district court’s dismissal of the claim, the Court of Appeals found no discriminatory practices. The Supreme Court granted certiorari.

    Did Duke Power Company’s intradepartmental transfer policy, requiring a high school education and the achievement of minimum scores on two separate aptitude tests, violate Title VII of the 1964 Civil Rights Act?

    Yes. After noting that Title VII of the Act intended to achieve equality of employment opportunities, the Court held that Duke’s standardized testing requirement prevented a disproportionate number of African-American employees from being hired by, and advancing to higher-paying departments within, the company. Neither the high school graduation requirement nor the two aptitude tests was directed or intended to measure an employee’s ability to learn or perform a particular job or category of jobs within the company. The Court concluded that the subtle, illegal, purpose of these requirements was to safeguard Duke’s long-standing policy of giving job preferences to its white employees.

    The SCOTUS refused to overturn the appeals court in Jordan v. City of New London in which Jordan was not hired based on an IQ test result that was too high. Anyone who wants to use intelligence to screen out applicants is free to do so but you better be damn sure it is job related, see above.

    One last example: a manufacturer selected our assessment to replace a local Ph.D. When I asked her why, she said, “Your assessment gives us more useful information in an hour than the Ph.D. gives us in a week and it costs 80% less.”

    What do all these people know that you do not know Dr. Williams?

  21. Mr. Gately… you are commenting on a subject you know little about…If being considered an intellectual bully means shining light on quack science, then I accept your comments gladly. Everything I write is in the Guidelines and Standards. It’s also standard practice in every large assessment consulting company I have known, every major corporation I have worked with, and every professional I/O I have met. It also represents best practices in the hiring and selection field…Of course, you are entitled to your personal beliefs and interpretations…

  22. Dr. Williams – thanks very much for laying out the differences – I totally get it…and it is pretty much common sense when you step back and review it.

    We have a few logic and reasoning assessments that, if administered accurately at the correct level, will be helpful…it’s one of many tests that we will be offering that will enable a person to demonstrate what makes them unique…thanks again, I REALLY APPRECIATE YOUR advice!

  23. You accuse other Ph.D.s who create assessments of practicing quack science; is that the best you can do? I’m sure you are aware that I did not create the assessment nor did I write the technical manual nor devise how to use the assessment. Your cavalier use of the phrase “quack science” is an insult to the professionals who created the assessment. In fact, it is unprofessional of you to denigrate the assessment and its developers just because you do not like the messenger. It is childish as well. I expect better from a Ph.D. I never tell an employer not to hire a Ph.D. since a Ph.D. can do much more than I can do. That said, I do advise employers that a Ph.D. is not required to use our assessment. The Ph.D.s who developed our assessment designed it so that users do not need to be a Ph.D.

    Our assessment and method conform to the “Uniform Guidelines on Employee Selection Procedures,” and our method conforms with the publication “Testing and Assessment: An Employer’s Guide to Good Practices” (U.S. Department of Labor, ETA, Office of Policy and Research, 1999), as well as professional standards, which helps explain why so many licensed Ph.D.s, hospitals, and government agencies, including courts and police departments, use our assessment. The facts do not support your conclusions. You really should provide something more than an ad hominem argument, which is unbecoming of a Ph.D.

    I advise all users of assessments to read the Uniform Guidelines and the Testing and Assessment publication prior to selecting and using assessments. Click on the links below for access to both publications.


    The following link takes you to the Uniform Guidelines table of contents, with links to each section’s content.

    Title 29–Labor



    1607.1 Statement of purpose.
    1607.2 Scope.
    1607.3 Discrimination defined: Relationship between use of selection procedures and discrimination.
    1607.4 Information on impact.
    1607.5 General standards for validity studies.
    1607.6 Use of selection procedures which have not been validated.
    1607.7 Use of other validity studies.
    1607.8 Cooperative studies.
    1607.9 No assumption of validity.
    1607.10 Employment agencies and employment services.
    1607.11 Disparate treatment.
    1607.12 Retesting of applicants.
    1607.13 Affirmative action.
    1607.14 Technical standards for validity studies.
    1607.15 Documentation of impact and validity evidence.
    1607.16 Definitions.
    1607.17 Policy statement on affirmative action (see section 13B).
    1607.18 Citations.


    The following link takes you to a PDF version of “Testing and Assessment: An Employer’s Guide to Good Practices,” U.S. Department of Labor.


  24. You are welcome…at the bottom of it all, highly accurate selection is just basic sense: it starts with a thorough understanding of the job (job requirements and business necessity) and ends with professional use of tests (valid, reliable, multi-trait and multi-method).

  25. “…you are commenting on a subject you know little about…”

    I was advised in 1992 that I would run into experts like Dr. Williams who will do and say whatever they can to disrupt a legal and effective business practice. Perhaps that is your problem, Dr. Williams: our products are both legally defensible and very effective, and users do not need to employ or use a Ph.D. to tell them it works; they can see it for themselves.

    “If being considered an intellectual bully means shining light on quack-science, then I accept your comments gladly.”

    Oh my, how unprofessional of you to accuse the Ph.D. that developed the assessment of “quack-science.” Do you have no shame? You owe the Ph.D. an apology even if you believe that only a Ph.D. should sell assessments, which would be foolish business advice, since it isn’t up to you how a Ph.D. brings an assessment to the business community. You may not like it, but that is the way it is. You really should avoid attacking a business model that works so well for the business community. Are you man enough to admit that you have no factual knowledge about our assessments other than what you read in on-line forums?

    Perhaps you think that only Ph.D. aeronautical design engineers should sell airplanes or that only Ph.D. automotive design engineers should sell cars? Your implication that only a Ph.D. can sell assessments is silly on the face of it since a Ph.D. has more value doing whatever it is a Ph.D. does well than selling assessments.

    Sales is not design and sales people do not need to know as much as the Ph.D. who designed the assessment but the sales person does need to know how to use the assessment as described in the Technical Manual.

    “Everything I write is in the Guidelines and Standards.”

    Our assessment process conforms to the Guidelines and Standards as well. If you have evidence to the contrary, please share it with me so I can verify the accuracy of your evidence. You do know that criticizing the business model is not evidence of anything other than a confirmation of your bias for a different business model. I certainly hope your comments are not rooted in concerns for decreasing my business success, since that would be unconscionable.

    If you do not offer serious evidence, I will assume your accusations are baseless and perhaps a business tort, i.e., disparagement. The following is from

    “In torts, a considerable body of law has come about concerning interference with business or economic relations. The tort of injurious falsehood, or disparagement, is concerned with the publication of derogatory information about a person’s title to his or her property, to his or her business in general, or anything else made for the purpose of discouraging people from dealing with the individual… Disparagement of goods is a false or misleading statement by an entrepreneur about a competitor’s goods. It is made with the intention of influencing people adversely so they will not buy the goods.”

    In light of the above quotation I am sure you will refrain from making unsubstantiated accusations. Posting comments about me is irrelevant to the efficacy of using our assessments, but I suspect you already know that, so why do you do it? Answer not needed, since your failure to supply credible evidence will speak volumes.

    “It’s also standard practice in every large assessment consulting company I have known, every major corporation I have worked with, and every professional I/O I have met. It also represents best practices in the hiring and selection field…”

    I agree, and our assessments do conform to The Uniform Guidelines on Employee Selection Procedures (UGESP). If you do not know this, then perhaps you need to reread the Guidelines after you learn how and why our method works and why it conforms to the Guidelines. The information is in the Guidelines; if you really want to find it, you can.

    “Of course, you are entitled to your personal beliefs and interpretations…”

    But you are not free to publish “derogatory information about a person’s title to his or her property, to his or her business in general, or anything else made for the purpose of discouraging people from dealing with the individual.”

    Be a gentleman and cease and desist from posting unsubstantiated allegations.

  26. Dr. Williams, I don’t care about what you think; I only care about what you write.

    I’m sure that discerning readers will see your non-response as a tacit admission that you know very little about our assessment or what I do.

    You cannot bully me into submission. I have been dealing with such ad hominem attacks since 1992 and you are not very good at it but you are persistent.

    For readers who are unfamiliar with what we do, I’ll explain it below.

    We have been showing hiring managers how to hire successful employees since 1991 and the methodology dates back to the 1960s; the tools are new.

    20 years ago we learned that hiring managers know how to hire competent employees because that is how they hire, for competence.

    We also know, because hiring managers tell us, that too many unsuccessful, competent employees get through the screening process.

    What is it about job performance that is missed when we hire for…
    • Age
    • Alma Maters
    • Competence
    • Education
    • Experience
    • Gender
    • Intelligence
    • Interviews
    • Race
    • Recommendations
    • References
    • Salary Histories
    • Salary Requirements
    • Work Histories

    Answer: Job-related behavior, especially when stressed. Successful, competent job applicants who became unsuccessful employees did a good job of controlling their behavior during the interviews. Measuring future behavior is the key.

    The following section is from an article at Corporate University Xchange on the following web page.

    “Hiring for talent increases the number of good hires and avoids the bad hires. If we want to be sure that all our new hires and employees become long-term successful employees, we need to make sure that all employees are competent and have a talent for their jobs.

    For employees to find job success…

    • talent is necessary, but not sufficient.
    • skills are necessary, but not sufficient.
    • training is necessary, but not sufficient.
    • orientation is necessary, but not sufficient.
    • knowledge is necessary, but not sufficient.
    • competency is necessary, but not sufficient.
    • qualifications are necessary, but not sufficient.
    • effective management is necessary, but not sufficient.
    • successful interviews may be necessary, but not sufficient.
    • exhibiting the appropriate behavior is necessary, but not sufficient.

    Talent is the necessary condition for job success that employers cannot provide their employees and schools cannot provide their students. Most employers don’t measure talent so they can’t hire for talent even if they do hire the best and the brightest. Talent and competence are necessary but they are two different things. Selecting for competence and talent avoids most performance problems.”

    Hiring unsuccessful employees is a choice, but not a requirement for doing business.

    Be sure to read the link above since there are two conditions when competent people should “not be” hired or selected for a position.

    Hiring for talent is the secret but if we can’t answer the five questions below with specificity we can’t hire for talent.

    1. How do you define talent?
    2. How do you measure talent?
    3. How do you know a candidate’s talent?
    4. How do you know what talent is required by each job?
    5. How do you match a candidate’s talent to the talent demanded by the job?

    Thanks for reading,


  27. Mr. Gately, I was with you there for a while, as WW can be aggressive and his faith in the power of academe is as touching as Richard’s faith in the Uniform Guidelines. But then you went and lost me by posting this limp word salad regarding “talent”.

    Any serious argument that states that “we want to be sure that all our new hires and employees become long-term successful employees” is clearly not serious, but simple sales-patter. To extend the unreality, you expect anyone to follow that Competence, Education, Experience, and Work History are secondary factors to some intangible, undefined Platonic essence of Talent?

    My argument goes to your assessment and methods as well as WW’s whole edifice- yes, they are doubtless far more effective than random chit-chatting about candidates, but they are variations on the same fallacy and limit: that individual assessment of the person and the job are the ne plus ultra.

    History is chock-full of surprise performances.

    Assessment must advance to deal with hiring as a chaotic emergent process, and to do so, it must include far greater in-depth modeling and understanding of group dynamics and the environment facing the organization.

    Good recruiters almost always generate qualified candidates. At important levels of leadership and creativity, almost everyone is “good”. Good leaders could likely draw good performances from most any of them.

    What matters is how the candidate and the people they are going to be most involved with are going to work together, because small-group bonds are the strongest of motivators and small-group collective intelligence is likely the most powerful, as actionable CI may not scale well.

    History says so everywhere you look.

    When I make a key hire, all I really care about is what our other key people are going to think of the person: what THEIR assessment of performance and potential AFTER the fact is going to be when the people who don’t yet exist make those reflections; they won’t exist until they create each other. Get it?

    PS, invoking legal threats on a discussion board when your feelings are hurt is beyond lame.

  28. Martin, employers need to make fewer bad hires and more good hires, and if in making fewer bad hires employers miss a good hire, then that is the price of improving the selection process.

    Would you mind sharing your answers to the five talent questions?

  29. Bob, the first three are tautologies, the last two are handled well by WW (and the rest of the I/O world).

    Riddle me this: why would you not re-interview and assess each of the expected close co-workers for fit, and then re-assess performance potential as a group?

    No coach hires a quarterback without thinking of his (or her) O-line and receivers first, and vice versa for line players and ball catchers.

    Would you mind sharing your definition of ‘Talent’?

  30. Martin – I think your comments on small-group collective intelligence are beyond interesting. I agree that once a person is hired, the ability of that person to work within the team or group determines that person’s job success (unless the job is one where the new hire works entirely on their own).

    I have recently been intrigued by assessments that measure coherence – group dynamic traits. Since you brought it up, I was wondering if you had found any tests for coherence that you could share…group interaction and engagement is critical in our new Engagement Economy, and it would be tremendous to be able to measure how a person normally relates to group situations and is best suited for group interactions… (obviously if anyone else knows where to look that would be great too!)

  31. @Bob G, @Dr. Wendell W:
    Could you restate your original points (before the p***ing contest started and you started making personal attacks) in 25 plain-English words or less? I lost the thread and I think both of you have useful things to say…

    Thank You,


  32. My pleasure…If I have this right, Yves suggested collective decisions might be better than individual ones, yet multiple interviews could be influenced by a strong-minded person.

    I cited research showing that using more interviewers tends to produce better hires than using fewer interviewers. I went on to offer the idea that these results might come from making fewer bad hires instead of making more good hires. I don’t think the idea of behavioral interviewing was ever mentioned.
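    The statistical claim behind the original post — that the mean of many independent guesses cancels out the extremes — can be illustrated with a toy simulation (mine, not from the thread; the 1,198-pound figure is Galton’s ox, and the error spread of 100 pounds is an arbitrary assumption):

    ```python
    # Toy "wisdom of crowds" illustration: the mean of many independent
    # noisy guesses is far closer to the truth than a typical individual.
    import random

    random.seed(1)  # fixed seed so the run is reproducible
    true_weight = 1198  # Galton's ox, in pounds
    guesses = [random.gauss(true_weight, 100) for _ in range(800)]

    mean_error = abs(sum(guesses) / len(guesses) - true_weight)
    typical_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)
    print(f"error of the crowd's mean guess: {mean_error:.1f} lb")
    print(f"average individual error:        {typical_error:.1f} lb")
    # With independent errors, the error of the mean shrinks roughly as
    # 1/sqrt(N), which is why the extremes "cancel out" as the head notes.
    ```

    The key precondition, as the cited social-influence research stresses, is independence: once guessers learn each other’s estimates, the errors stop canceling.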

    In addition, I suggested there are some well-researched tests and tools that deliver probabilities of success based on biographical data but these are costly to build. I also suggested the most accurate assessments (measures of job skills) are ones that measure multiple traits (i.e., the whole job) multiple times (i.e., using different methods).

    ‘That help?

  33. Keith, I was responding to the following

    “Be careful of the “job match” idea..often this means comparing traits of the job to traits of the person..In real life, some traits have more impact on the job than others. Furthermore, the best predictor of job performance (in most jobs) is intelligence (and this is not a trait).”

    Dr. Williams is wrong that employers need to be careful of the job match idea unless he is making the useless point that hiring managers should always be careful.

    Also, Dr. Williams is wrong that job matching means, “comparing traits of the job to traits of the person.” If he means other assessments do so then let him be specific and not leave it hanging that our job matching does something that it doesn’t do.

    “In real life, some traits have more impact on the job than others.” This is a mainstay of the job-matching method, but if you read Dr. Williams you would think we ignore it.

    It would be nice if Dr. Williams took the time to learn about the approach we use that allows employers to decrease turnover and boost new hire productivity.

    “The best predictor of job performance (in most jobs) is intelligence” is offered as a knock on our method yet our clients know which of their jobs require such mental horsepower and which do not. More is not always better.

  34. Martin, “why would you not re-interview and assess each of the expected close co-workers for fit, and then re-assess performance potential as a group?”

    Employees are assessed first.

    Benchmark success patterns are developed from the assessment results of the top performers.

    Benchmark failure patterns are developed from the assessment results of the bottom performers.

    Benchmark patterns are developed from the assessment results of the average performers.

    The scales (there are 20 of them) that have all top performers on one side and all bottom performers on the other are identified as critically important to job success.

    We don’t do interviews, and we only assess people once. There are 10 reports that can be printed as needed as the person goes from applicant to employee and into new job assignments and promotions; all reports are free.

    The method answers the following questions.

    1. Can the person learn and do the job in the time required?

    2. Will the person behave on the job as the job demands, especially when stressed?

    3. Is the person sufficiently interested in the job to want to do the job for a long time?

    “No coach hires a quarterback without thinking of his (or her) O-line and receivers first, and vice versa for line players and ball catchers.”

    I quite agree, and I find it interesting that at the annual spring NFL combine before the NFL draft, all players go through a psychometric assessment. What do they know?

    Our approach allows managers to hire people who will thrive working for them and with their coworkers.

    “Would you mind sharing your definition of ‘Talent’”

    If you are interested to see how our assessment presents talent data, please visit each of the web pages below to see what a state-of-the-art assessment looks like.

    The following web pages (see links below each item) for the famous and infamous Fit Family demonstrate various degrees of Overall Job Match for the GCM/A position. The talent pattern for each scale is the shaded numbers, usually not more than 3 or 4 numbers wide.

    1. a Bad Job Fit, i.e., Overall Job Match less than 70% (42% for Bobby Bad Fit)

    2. a Marginal Job Fit, i.e., Overall Job Match of 70% to 80% (75% for Mary Marginal Fit)

    3. a Good Job Fit, i.e., Overall Job Match of 81% to 90% (88% for Georgina Good Fit)

    4. a Great Job Fit, i.e., Overall Job Match above 90% (95% for Billy Great Fit)

    5. What the scores mean.
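    The bands above reduce to a simple threshold classifier. A sketch using the quoted cut points (the function name is mine, and since the original bands meet at 90%, I treat exactly 90% as a Good Job Fit):

    ```python
    # Hypothetical sketch of the Overall Job Match bands quoted above.
    def fit_category(match_pct):
        """Map an Overall Job Match percentage to the quoted fit bands."""
        if match_pct < 70:
            return "Bad Job Fit"
        if match_pct <= 80:
            return "Marginal Job Fit"
        if match_pct <= 90:
            return "Good Job Fit"
        return "Great Job Fit"

    # The Fit Family examples from the comment:
    for name, pct in [("Bobby", 42), ("Mary", 75), ("Georgina", 88), ("Billy", 95)]:
        print(f"{name}: {pct}% -> {fit_category(pct)}")
    ```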

    Clients assess their qualified job applicants, usually 3 to 5 for each position to be filled, so each person assessed is competent.

    Our clients tell us that they had always hired competent job applicants but they just could not identify future unsuccessful employees.

    The method has been around since the 1960s and we have been bringing it to the business community worldwide since 1991 and I have been doing it since 1992.

    Martin, I hope I answered your questions.


  35. @ Dr. Williams:
    Thank you. This interests me in particular because I have believed that the cost (time, resources, difficulties in coordination/consensus-building) of larger/more interviews outweighs the benefits of increased input/information.
    I’d like to read these studies in detail; perhaps the interview conditions were limited so that larger/more interviews are largely impractical, or MAYBE I’M WRONG and fewer/smaller isn’t usually/always better.

    I’d hoped you would have used the opportunity to make an affirmative statement(s) about what you believe/have found out, as opposed to another opportunity to attack.


  36. I understand your concerns about research…anytime there are big dollars involved, there is an opportunity to fudge..Unfortunately, this is the case everywhere..

    In our field, research is done; the article is submitted to an editor; IF it is accepted, the editor sends that article to multiple reviewers who are also experts; the reviewers make comments and suggestions and send the paper back to the editor…The process repeats until everyone is satisfied. ‘Not perfect, but better than nothing.

    If you want to read all the stuff on interview research, go to any university library and ask for the Psych-Lit or Psych-Info data base…Then do a search on the key words “interview” and “validity” …Most studies will show highly structured BEI formats work best. The best BEI’s recommend 2-3 interviewers integrate their data post-interview.

  37. Thanks, Dr. Williams. I’m a bit confused- it sounded to me that the earlier information indicated that a higher number (10?) of interviewers provides more information/better hiring results, but here it seems that 2-3 interviewers are best. Is the first (10?) unstructured vs. the other (2-3) structured/BEI?


  38. Sorry to confuse…the most accurate way is BEI, 2-3 interviewers, and data integration…However, there are a couple of studies showing multiple interviewers (not using BEI) do a better job than using just a few…I break it down this way:

    BEI Process = job analysis data + highly structured questions + considerable interviewer training + focus on candidate skills (in addition to outcomes) + 3 interviewers all seeking similar evidence of skills + data integration.

    Traditional Interview Process #1 = 2-3 interviewers + job description + easy to fake + untrained interviewers all seeking different things

    Traditional Interview Process #2 = 5-10 interviewers + job description + harder to fake + untrained interviewers all seeking different things

  39. Thanks again. So if I understand properly: 2-3 trained BEI interviewers or 5-10 untrained interviewers work best. (An analogy: if you’re hunting, use a rifle or both barrels of a two-barrel shotgun, not one.)
    I wonder how the cost of training people in BEI compares with the benefit of needing fewer interviewers/interviews?


  40. The 2-3 trained BEI interviewers consistently works best…better candidate feedback + consistent results + lots of confirming studies

    It seems to me that 5-10 untrained interviewers could be intimidating, a nightmare to schedule, take more person-hours, be less professional, and be more demanding on the candidate, to name a few..also fewer studies on this method

  41. Almost all of my clients over an 8 year period – (mostly) multi-billion dollar in size – conduct the 5-10 untrained scenario – and as you mention – scheduling it is a bigger challenge than the interviews themselves…(or so it seems). Due to the penchant to conduct this type of review – from candidate feedback we really get to know the interview styles of a LOT of mgmt. This helps immensely and we do very well as they rely on us to provide candidates that will make it through the gauntlet…others usually don’t have a chance of getting cands through it consistently…

    Yup, there has to be a better way!!

  42. @ Dr. Williams, K.C:
    I agree- there has to be a better way (2-3 BEI-trained interviewers seems to be one) but the larger/more arrogant firms (We know who you are!) can get away with treating candidates like dirt.

