You Validate Parking, Why Not Tests? Part 2

In Part 1, I discussed a major lawsuit involving a large corporation and used it as a learning example. In Part 2, I'll discuss how common HR practices can lead to similar challenges for other companies.

Silly Test and Interview Practices

Don't get me wrong: I believe very much in using tests and good interview techniques in the hiring process. They are quick and efficient ways to predict future job performance, but only if they are job-related and someone has taken the time to formally validate them. There are four general kinds of validity HR needs to be concerned with:

  1. Face validity. Does the test content resemble the job? For example, don’t ask questions about a person’s hobbies if you aren’t hiring them to work in a hobby shop. Lack of face validity is cited as a major stimulus for legal challenge. Many interviewers and trainers like to think of themselves as amateur shrinks. Avoid personally invasive items or items that make you sound like a psychologist. Unless you have a Ph.D. in selection psychology, psychobabble can, and probably will, get you into trouble.
  2. Content validity. Does the test measure job content? For example, asking questions about programming knowledge is okay for a programming job, just as taking a typing test is valid for a typist. But asking a programmer to take a general personality test is a “no-no” unless you have proof that test scores and job performance are related.
  3. Criterion validity. Do test scores have a direct relationship with some aspect of job performance? For example, being able to pass a programming test (content validity) and being able to “out-code” other coders are two different things. Criterion validity means high scores predict high performance and low scores predict low performance (a quick way to check this is sketched just after this list).
  4. Construct validity. Does some deep underlying factor affect job performance? Constructs are tricky to validate. Take mental ability, for example. Highly successful people are almost always smart, but smart people are not always highly successful. So what do you think? Should you use mental alertness tests to hire employees? If you don’t know, you are in trouble.
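
If you want to see what criterion validity looks like in practice, the sketch below is about as simple as it gets: compare pre-hire test scores with later performance ratings for the same people. Everything in it is hypothetical; the scores, ratings, and sample size are made up, and a real validation study needs a proper sample, a reliable performance measure, and someone qualified to run it.

```python
# Minimal criterion-validity check: do pre-hire test scores track later performance?
# Hypothetical numbers for illustration only; a real study needs a proper sample.
import numpy as np
from scipy import stats

test_scores = np.array([62, 71, 55, 80, 68, 90, 47, 73, 85, 58])            # pre-hire test scores
performance = np.array([3.1, 3.6, 2.8, 4.2, 3.3, 4.5, 2.5, 3.8, 4.4, 2.9])  # later supervisor ratings

r, p_value = stats.pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
print(f"share of performance variance explained: {r**2:.0%}")
```

If a vendor cannot show you a number like this, computed on your jobs and your people, you are taking the test's usefulness on faith.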

If you find all this validation and job analysis stuff confusing, you have three choices: 1) you can do nothing, hope your “test” scores predict performance, and wait for the inevitable legal challenges; 2) you can go back to school to get an advanced degree in the subject; or 3) you can hire an expert to help you. If you are not serious about how you hire and promote people, I can assure you the government will be.

13-Step Program to Getting Your Company’s Name on the Front Page

If an expensive hiring lawsuit is what your company is looking for, then this 13-step program is just for you!

  1. Ask interview questions like, “If you were a tree, what color leaves would you have?” or “What would you do if a subordinate had B.O. so bad it triggered the sprinkler system?” The answers won’t predict performance, but you will sound very sage and insightful.
  2. Ignore line managers and jobholders when developing competency lists. HR and training know infinitely more about line jobs than jobholders do, anyway.
  3. Be sure to use tests that were not developed for hiring. They will help you learn personal information so you can ask the applicant for a date prior to making a job offer.
  4. Never, ever conduct a study comparing test scores with job performance. You might discover the test really doesn’t work and then you would have a lot of explaining to do.
  5. Ignore the EEOC Uniform Guidelines and trust foolish vendor claims that they aren’t very important. You won’t see your vendor in court, but you’ll have the peace of mind knowing they are thinking about you.
  6. Be sure to use the same test with everyone who applies. It makes scoring much simpler.
  7. Have every applicant tested by a clinical, counseling, or sports psychologist. Research shows they use the wrong tests, tend to test for mental health instead of job skills, and seldom use predictive reports. Nevertheless, it helps diffuse responsibility. All those credentials really sound good and you can always say, “I told you so.”
  8. Never follow up on the success of your hiring and promotion decisions. You may discover things you don’t like and then people will question your teamwork and company loyalty.
  9. Be sure to treat demographic groups differently. Only give tests to people you don’t like or think are stupid or ugly. Be sure to use reading and mental alertness tests, even if reading and mental alertness are not important to the job.
  10. Always complain that screening people out early in the hiring and promotion process increases your work and reduces your applicant pool. Save a few bucks on hiring tests and hire people without full job-skill data. It is easier to hide the cost of low performance, high turnover, and high training expense in a line manager’s budget than in HR.
  11. Ignore 30 years of hiring research and rely instead on “gut” feelings. Gut feels good and, besides, it will take six to twelve months before anyone knows whether you hired a good employee or not. By then, you can fill more open positions with marginal employees. Be sure to snicker at line managers’ inability to solve their people problems.
  12. Never check your hiring or promotion accuracy. You just find ’em. Let line managers do the rest. Out of sight, out of mind.
  13. Never assume responsibility for being the “people expert” department. Let HR employees “learn while they earn.”

Conclusion: Think about why I stopped counting at step 13.


3 Comments on “You Validate Parking, Why Not Tests? Part 2”

  1. Suggest you also look at studies conducted by Smith & Robertson 1993 in relation to the validity of selection methods. Their work is excellent!

    Also, you didn’t broach the issue of reliability (i.e., does the test produce consistent results?).
    A test can be reliable without being valid!

    Also, studies from Hunter & Hunter (1984) would suggest that even the most valid tests are at best about 10% effective in predicting suitability for a role (therefore 90% is still left to chance). I would be very interested in hearing your thoughts on methods available that increase this validity.

    Amazed that new methods are not being developed or investigated, as I believe recruitment is one of the most important activities a company involves itself in. Roberts, G. (1997), ‘Recruitment & Selection: A Competency Approach’, IPD, suggests that “it is not possible to optimise the effectiveness of human resources, by whatever method, if there is a less than adequate match.” To put it more crudely, you can’t polish a turd!

    Are there any definitive recruitment methods?


  2. You make some “valid” points. Unfortunately, there is only so much about validity one can write (in a lay column) without putting readers to sleep. I feel I have done a good job if I can just get people to THINK about discovering ANY kind of formal linkage between test scores and job performance instead of buying vendor nonsense about validity or using tests that were never developed for selection; especially, as you pointed out, if I can get people to recognize the inherent weakness of the link between a written test score and actual job performance (à la Wernimont and Campbell). As I often tell clients, training enhances skills. It was never intended to fix hiring mistakes.

    Wendell


  3. This is an addendum to my earlier reply. It is in response to your request for my ideas for improving the validity of hiring tests. Normal people should just ignore all this gobbledegook. It is the way techies talk (it makes us sound smart, is designed to intimidate our listeners, and shows that we are totally out of touch with reality).

    Hello, Ronan. If the whole idea behind hiring tests is to predict performance, then predictive accuracy, in large part, depends on the strength and nature of the relationship between the predictor and the predicted.

    For example, validity could be significantly increased by recognizing that:
    1) The relationship between test scores and job performance is not always linear. Although traditional statistics are the tool of choice, the nature of the source data invalidates many statistical assumptions of linearity, normality, and equality of variance. For example, there is a point where being too smart for the job leads to boredom and reduced performance. If an investigator uses regression, ANOVA, or correlation coefficients to examine the validity of G versus performance, odds are the observed validity will be unrealistically low (the small simulation after this list illustrates the point). In this case, AI is a better choice for validating a relationship because it does not assume linearity.
    2) The predicted variable is often flaky (a technical term often used in the US to indicate “crap”). Supervisor ratings, for example, are notorious for their subjectivity and lack of reliability (see the contextual/task performance work of Borman and Motowidlo). Having a more reliable DV would increase predictive accuracy substantially.
    3) There should be a strong relationship between the construct measured and the performance rated. Just to take a measure of G, for example, and attempt to find a correlation with job performance is like searching through a dark room for furniture. Sometimes you find it and sometimes you don’t. It is better to use a performance criterion that is directly related to abstract and numerical thought. The problem is, those types of criteria are exceptionally difficult to find. Anyway, the better (and more related) the criterion, the better the validity. This can often be discovered through a thorough job analysis.
    4) Recognize that differential validity is alive and well. The number of false positives and false negatives may not always be equal. One may have a higher correlation than the other. Combining both in the same study will decrease the strength of the correlation (a sensitivity analysis graph will point this out). For example, if you examine tests of customer service, sales ability, etc., you will probably find that low scores have higher predictive accuracy (true negatives) than high scores (true positives). That is, you can probably trust the results among dull subjects, but be unable to separate “book-smart” subjects from “actual smart” high-scoring subjects.
    5) It always helps to realize that a “symbol” of performance will be less robust than an example of the performance itself. There is a reason why, instead of relying on test scores, pilots are asked to fly simulators and people are asked to take driving tests. They help reduce errors because they have higher fidelity (and validity) to critical elements of the job.
    6) It is a good idea to use tools that were designed to predict performance differences, not communication or personality styles. There are too many studies using generic instruments that have no foundation in job performance theory. Nice effort, but wrong direction.
    7) Finally, I have found that test design has a great effect on validity. Ipsative-design instruments seem to be more sensitive than Likert designs (even with all the inherent problems of designing and developing an ipsative test).
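
    To make point 1 concrete, here is a tiny simulation. All the numbers are invented and the inverted-U (“too smart leads to boredom”) shape is assumed purely for illustration: when the true score-performance relationship bends over like that, a plain linear correlation understates it, while even a simple curvilinear fit recovers it.

    ```python
    # Simulated inverted-U between ability (G) and performance: the linear validity
    # coefficient looks weak even though a strong curvilinear relationship exists.
    # All values are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    g = rng.normal(100, 15, 500)             # simulated test scores
    peak = 105                               # assumed point where "too smart" turns into boredom
    performance = 5 - 0.002 * (g - peak) ** 2 + rng.normal(0, 0.4, 500)

    # Linear view: Pearson correlation between score and performance
    r_linear = np.corrcoef(g, performance)[0, 1]

    # Curvilinear view: multiple R from a quadratic least-squares fit
    coeffs = np.polyfit(g, performance, deg=2)
    r_curvilinear = np.corrcoef(np.polyval(coeffs, g), performance)[0, 1]

    print(f"linear r = {r_linear:.2f}, curvilinear R = {r_curvilinear:.2f}")
    ```

    A quadratic fit is only one stand-in for the non-linear tools mentioned above; the point is simply that the choice of model can hide or reveal the validity that is actually there.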

    Well, you asked. That’s my 2 pence worth.

    Wendell

