Sometimes people blow off things they don’t know. Confusing correlation with causation is a prime example. You might be thinking, so what? So let’s clarify the consequences of this confusion with an example. Say your organization’s president is getting a massage with one of his buddies (or they are giving each other a massage, no matter). Anyhow, the buddy tells him about this hiring test they are using. It’s great! Forgetting for a moment that neither person has any expertise in professional testing, and that their conversation is unburdened by supporting facts, let’s examine what’s wrong with adopting a test based on hearsay.

Correlation

Do blue eyes cause blond hair? Does wearing a skirt make you female? Does playing sports make you male? These are all examples of correlations, that is, cases where one factor tends to be statistically associated with another. Men who wear skirts might be considered strange (outside of Scotland), but that does not “cause” them to be female any more than the ladies’ underwear they wear to board meetings. Correlations can run from a perfect negative (as one factor increases, the other decreases) through zero (one factor has no relationship to the other) to a perfect positive (as one factor increases, the other increases). The important thing to remember is that although correlated factors have a statistical relationship (i.e., a correlation coefficient), they do not cause each other!

Causation

Does eating more food than your body can burn lead to obesity? Do math errors lead to tax audits? Does spitting on a big ugly guy at a biker bar lead to abundant pain and suffering? Most of the time, yes. These are causations, that is, cases where one factor causes another. Here’s where it gets complicated: causations are also reported in terms of correlation coefficients. They have the same kind of positive, zero, and negative movements.
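To put numbers on those positive, zero, and negative relationships, here is a minimal sketch of Pearson’s correlation coefficient, the usual statistic behind these claims. All of the data below is invented purely for illustration:

```python
# Pearson's r runs from -1 (perfect negative) through 0 (no relationship)
# to +1 (perfect positive). None of these values says anything about cause.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented example data, all driven by (or unrelated to) hours of daylight.
hours_of_daylight = [8, 10, 12, 14, 16]
ice_cream_sales = [20, 35, 50, 65, 80]    # rises with daylight
heating_bills = [90, 70, 50, 30, 10]      # falls with daylight
shoe_sizes = [9, 11, 10, 11, 9]           # no relationship at all

print(pearson_r(hours_of_daylight, ice_cream_sales))  # 1.0  (perfect positive)
print(pearson_r(hours_of_daylight, heating_bills))    # -1.0 (perfect negative)
print(pearson_r(hours_of_daylight, shoe_sizes))       # 0.0  (zero)
```

Note that daylight really does have a causal hand in ice cream sales and heating bills, while shoe sizes are just noise; the coefficient alone cannot tell you which situation you are in.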
It can be confusing, but the important thing to remember about causation is that one factor causes the other.

Back to the Organization

Now, suppose we take that “special test” the two massage buddies discussed and start using it for hiring. Suppose, further, that we actually take the time and effort to do a statistical analysis comparing its scores with a legitimate measure of job performance. Then suppose we even find a correlation. How can we be sure whether we are dealing with correlation or causation? In hiring, we want to know whether a test score predicts (i.e., causes) performance. We don’t care about mere correlations. In fact, if we don’t know the difference, we will make expensive decisions based on worthless data. (“Hey, honey, will it rain today?” Answer: “It’s 72 degrees, dear.” See the problem?)

There are many more tests on the market developed for training than for hiring. Some of them are decent. Some are worthy of wrapping fish. But the user won’t know whether she should be wrapping fish or predicting performance unless she wades through a technical manual that outlines how and why the test was developed. This is often a problem, because aggressive training-test publishers don’t have to live with the consequences of their product: they are not legally liable for test misuse; they would like to sell all the tests they can; they seldom develop a technical manual; and most know just enough about hiring to get employers in trouble. This is a common problem with communication tests, general personality tests, acronym-named tests, leadership-style tests and the like when they are used in hiring. These tests point out style differences that may or may not help a jobholder improve performance, but they are usually more correlational than causal. Bottom line: scores don’t always predict job performance.

Dynamite the Pond?

Some folks use dynamite to fish. It’s unethical, but they light a stick of dynamite and throw it in the pond.
The shock of the explosion stuns any fish unlucky enough to swim over and investigate the splash. As dozens of little shiny bodies rise to the surface, the “fisherman” scoops up the edible ones and discards the rest. This is an analogy for using the wrong kind of test: it is unethical because it affects everyone indiscriminately. Dynamite tactics are bad practice, in fishing or in hiring. A test developed for hiring MUST be based on a sound hiring theory; otherwise it may be unfair to applicants and an inaccurate predictor of job performance.

I don’t care what the president says; unless he and his buddy are more than “friends,” he probably has the same objective you do: high producers, few problems. Know the difference between correlation and causation. Help stop silly test-use nonsense that erodes HR credibility. Want to be thought of as a professional? Either learn more than the boss does about testing, or join a massage group.
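As a closing demonstration of why a correlated test can still be a worthless predictor, here is a hypothetical sketch (every number invented for illustration) of a confounding factor at work. Years of experience drives both the test score and the performance rating, so the test correlates strongly with performance even though it causes nothing:

```python
# A hypothetical confounder: experience drives BOTH numbers below, so the
# "special test" correlates with performance without causing any of it.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

years_experience = [1, 2, 3, 4, 5, 6]     # the hidden third factor
test_score = [52, 54, 56, 58, 60, 62]     # rises with experience
performance = [30, 34, 38, 42, 46, 50]    # also rises with experience

print(pearson_r(test_score, performance))  # 1.0 -- yet the test causes nothing
```

Drop the test entirely and performance is unchanged; the correlation was riding on experience all along. That is exactly the kind of “validity evidence” a massage-buddy recommendation tends to rest on.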