Sense, Common Sense and Nonsense

I’ve been wandering around the Web lately, just looking at some of the claims and counterclaims made by companies who sell “hiring solutions.” It is incredible! One vendor predicts exactly how much each hiring test adds to hiring accuracy. Does this sound good or what? How about if you knew that gathering this kind of data would require hiring 200 people using test “1,” hiring another 200 people using tests “1 and 2,” hiring another 200 people using tests “1, 2, and 3,” and so on, until you have six groups of 200 people each. OK, now that you have hired 1,200 people for the same job using different tests, wait around six to nine months, then measure the performance of each group. OK so far? Good. Next, statistically compare personal performance to each set of hiring tools. Now, the really, really big question: name just one business that would go to all that trouble and rigor. Don’t know any? Neither do I. (And, I think it is safe to say, neither does anyone else.)

The next site claimed their job descriptions and interview guides were “ADA compliant.” I’m not exactly sure what that means. Written in LARGE PRINT? Don’t come right out and ask someone if they are disabled? Not even close, folks. The ADA requires employers to identify something called “essential functions” for each specific job. In practice, you put this information to work by asking something like, “This job requires you to lift paperclips weighing 150 pounds. Can you do that with reasonable accommodation?” A generic job description could not possibly include that kind of detail. (So maybe the vendor’s job descriptions comply with rules of the “American Dental Association”?)

Still others claim their written tests can accurately predict sales or management success (brazzzapp!). Give me a break! Since when does a written test accurately predict job skills? Skilled researchers rate a written test as a whopping success when it predicts about 4% of the variance in performance. That still leaves 96% to chance.
Want to bet your salary on test results that leave 96% to chance? Test vendors seem to make some gigantic claims. It is too bad they seem to totally ignore established research on the subject and seem to think statistically valid research studies are unimportant. In the vast majority of cases, you should consider vendor test claims to be utter nonsense. Basically, it all comes down to being a very informed buyer. And, as an informed buyer, you need to always remember: vendors have no responsibility for test use. Like it or not, users have full responsibility for both legal compliance and hiring effectiveness. You need to ask: if vendors don’t know the basics of their profession, what impact will that have on your organization? But let’s discuss why this stuff won’t go away…

Take a Chance?

Weed out the “bottom feeders” and flip a coin with the rest. Heads you’re hired. Tails you’re toast. Even bad systems hire good people about half the time. Think about it: you could hire people based on shoe size and still be 50% right! Combine those facts with our human nature to remember successes and forget failures, and you cover most selection success claims.

Gone Fishing?

Hiring professionals seldom stick around to observe first-hand their triumphs and failures. They have more people to find and more jobs to fill. New employees tend to drift away into the corporate hallways, where they eventually become someone else’s problem. The consequences of a bad hire seldom turn up until months or even years have passed. By that time, a lot of water has passed over the dam.

Up Close and Impersonal?

There is an old joke involving either a stubborn camel or a donkey. The punch line involves whacking the recalcitrant animal with either bricks or lumber on a sensitive part of its anatomy. Trainers would call this “personal” feedback. Hiring managers would call this a “performance problem.” Bad hires seldom live side by side with hiring professionals or test vendors.
They live with a hiring manager who is often reluctant to say anything bad about them because it would reflect on his or her decision-making reputation (not a good thing for climbing the corporate ladder). So good performers get trumpeted and bad performers tend to get quietly swept under the rug.

What, You’re Giving Me a Test?

It doesn’t take much to put up a website and start spouting nonsense – sort of like old-time hucksters rolling their wagons into small towns and pitching snake oil to anyone with a buck. The only way to avoid being snookered is to learn as much as you can about hiring tools and best-practice processes. It is a long road, but you can quickly pick up a few street smarts by pondering the following questions:

  1. Which of the following do you think delivers the most accurate data about applicant job behavior?
     A. Giving applicants a written test, trusting them to be honest, and making the assumption they will actually act that way on your job.
     B. Interviewing applicants about their challenges and failures, trusting they will be honest, and making an assumption they will actually act that way on your job.
     C. Putting them in a carefully controlled, job-like environment, asking them to perform their way out of it, and evaluating whether or not they were effective.
     D. Asking them if they believe people secretly communicate using their belt buckles.
  2. Which of the following do you think delivers the most accurate data about applicant ability to solve challenging mental problems?
     A. Giving them a personality test that asks questions about whether they like to solve problems.
     B. Asking them questions about when they had to solve a tough problem, assuming they are honest with you, and assuming they are really as smart as they say they are.
     C. Basing your decision on a 60-minute meeting where you chatted about nothing in particular.
     D. Giving them a written, job-like exercise that requires rigorous problem solving and mental analysis.
  3. Which of the following do you think delivers the most accurate data about applicant ability to plan and organize activities?
     A. Having them check off adjectives that describe themselves and using the scores to predict job skills.
     B. Asking them to complete a written, job-like exercise that requires untangling complex issues and organizing a plan of action.
     C. Asking them questions about a past project they were involved in, trusting they were honest with you, and assuming they really did it without outside help.
     D. Asking them what it was like being abducted by aliens and taken to the planet Snark for interrogation.
  4. Which do you believe delivers the most accurate data about applicant attitudes, interests, and motivations?
     A. Asking applicants questions about what they like or dislike about working, and assuming they were honest with you.
     B. Giving applicants a validated test that measures whether they have the same attitudes, interests, and motivations as both high and low performers.
     C. Sneaking up behind applicants when they are not looking and blowing an air horn to observe their reactions under pressure. (Just a suggestion: you might want to have a paramedic handy.)
     D. Pretending you are a psychologist and asking applicants what kind of animal they would like to be.
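The “Take a Chance?” arithmetic above, that even coin-flip hiring looks right about half the time, is easy to check with a quick simulation. This is a minimal sketch, assuming an applicant pool in which roughly half the candidates would actually perform well; the pool fraction, sample size, and function name are illustrative, not from the article:

```python
import random

def random_hiring_success_rate(pool_good_fraction=0.5, n_hires=100_000, seed=42):
    """Simulate purely random ("shoe size") hiring: each hire is drawn at
    random from a pool where `pool_good_fraction` of applicants would
    actually perform well. Returns the fraction of hires who succeed."""
    rng = random.Random(seed)
    good_hires = sum(rng.random() < pool_good_fraction for _ in range(n_hires))
    return good_hires / n_hires

print(f"{random_hiring_success_rate():.1%}")  # close to 50%, with no test at all
```

The point of the sketch is that any selection tool has to beat this base rate before its success stories mean anything, which is why remembered wins and forgotten failures prove nothing.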


  • If you want accuracy, “show me” will ALWAYS be more accurate than “tell me.”
  • Unstructured interviews may meet an interviewer’s “get to know you” need, but they produce results that are no better than chance.
  • Structured interviews are somewhat accurate (about 10%), but are still self-reported stories.
  • Mental alertness tests are very good predictors of performance (about 25%), but have major adverse impact.
  • Job-specific simulations are the most accurate predictors of performance (about 70%), but they take the most time.
  • A good selection system will produce demographically diverse people with highly consistent job skills.
  • You don’t have to hire anyone who is not qualified for the job.
  • The Web is rife with empty words and nonsense claims.
  • Watch out for bogus tools that could help your attorney buy a new Mercedes or worse yet, cripple your organization with marginal performers.
  • Your only defense against nonsense is to become an informed buyer.
  • You can read about best (and legal) practices at the following website:
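The accuracy percentages in the list above read as shares of performance variance explained, and the link between a tool’s validity correlation r and that percentage is just r squared. Here is a minimal sketch; the r values are illustrative numbers chosen to roughly match the percentages cited, not figures from any particular study:

```python
def variance_explained(r: float) -> float:
    """Percent of job-performance variance explained by a predictor
    whose validity correlation with performance is r (i.e., r squared)."""
    return r * r * 100

# Illustrative validity coefficients, chosen to match the article's figures
predictors = {
    "unstructured interview":  0.05,  # essentially chance
    "structured interview":    0.33,  # about 10% of variance
    "mental alertness test":   0.50,  # about 25% of variance
    "job-specific simulation": 0.84,  # about 70% of variance
}
for name, r in predictors.items():
    print(f"{name}: r = {r:.2f} -> {variance_explained(r):.0f}% of variance")
```

This is also why a test with r around .20, which squares to the 4% mentioned earlier, can look respectable in a research report while still leaving 96% of performance unexplained.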



3 Comments on “Sense, Common Sense and Nonsense”

  1. While I appreciate Wendel’s fierce and repeated attacks on the rampant folderol that appears in the online screening and selection space, there is a principle even more basic than those found in industrial psychology that trumps many of his pseudo-scientific pronouncements: “Two wrongs do not make a right.”

    First, the profuse use of r-squared in order to frighten unsophisticated personnel decision makers ignores the work of Brogden and others (in the 1950s) that points up the relative meaninglessness of the term. The validity correlation itself is a direct index of utility (and it’s hard enough to understand without squaring it).

    Second, citing a validity for structured interviews of .33 undershoots the average of recent meta-analyses by 20 points.

    An obsession with local validation studies in the face of the validity generalization evidence represents a colossal waste of time and effort. Better to spend that time on understanding role requirements and how they relate to O*NET benchmark positions or competency dimensions. I suggest spending some time “catching up” with the writings of Cascio, Schmidt, Campion (both of them), Boudreau, Borman, and Hough.

    Understanding how the internet lifts the veil of secrecy on scoring keys will become a powerful motivator for discovering approaches that truly capture confirmable performance competencies, beyond what we have had in the past.

    Now I understand and appreciate the “wet fish across the forehead” approach as much as anyone. Let’s just keep the fish fresh in order to avoid confusing the message.

  2. For a response that slaps the epithet “pseudo-science” on a common sense approach to metrics for testing candidates, you surely marched right into pseudo-science with your own argument. In all of the high-sounding jargon and names you dropped, there is not a shred of “science,” any more than there is in “marketing studies,” “focus groups,” “surveys,” “polls,” or any other statistical quantification of “data.” These are only bits of information to consider in an overall whole, not much more useful than one’s experiences and COMMON SENSE. None of it is “science.” Science is proposing a specific outcome based on a concrete theory and testing for that outcome. The outcome must be fundamentally 100% repeatable to validate the theory. No “testing” we will ever do will EVER come remotely close to predicting future candidate performance to that degree, regardless of how “scientific” we make ourselves sound with terms like “meta-analyses.” It seems to me that Wendell is right on target.


  3. Your lack of understanding around the word “science” is only matched by your lack of understanding around the word “pseudo-science.”
    We would have to abandon all but Newtonian physics were we to insist on absolute repeatability in order to call something “science.” The arrogance of the under-equipped and partially trained never ceases to amaze me. No doubt there are plenty of consumers for strong pronouncements that align with common frustrations. I guess I have always found lining my pockets on someone else’s ignorance to be a little less than satisfying.

