Thoughts on the Ricci Decision

It has been an interesting week as I have watched issues that I deal with on a daily basis become part of the mainstream news media. For those of you who are unaware, earlier this week the Supreme Court handed down a ruling in a case that deals with discrimination and employment testing. This case is highly relevant to what I and other I/O psychologists do, and its complexities do not surprise me at all. I cut my teeth as a psychometrician for the City of New Orleans, helping to create and validate police and firefighter testing. I can say with confidence that, when it comes to test development and validation, public service testing carries with it by far the most potential for litigation. There are many reasons for this, all of which seem to hinge on the promotion (or lack thereof) of those in a protected class (e.g., minorities) over those in non-protected classes.

A complete discussion of the intricacies and technicalities of validation, discrimination, adverse impact, and differential prediction is beyond the scope of the words I am writing today. Suffice it to say that this case has placed competing priorities in the use of testing in the spotlight. These competing priorities are using fair testing while striving to eliminate discrimination in hiring. While Title VII of the Civil Rights Act of 1964 has attempted to provide some guidance in relation to these competing goals, the Ricci case has laid bare some critical issues that in my opinion certainly call for the government to re-evaluate and modernize the standards it has set.

We are mandated to use valid tests. Valid tests can often lead to minorities being hired at lower rates than applicants of other races. This is seen as OK as long as the test has been validated, because in theory this means the test is job-related, and job-relatedness is the standard by which the legality of testing is determined.

However, what are we to do when sticking to the use of validation — as we have been asked to do — creates a situation that actually inhibits the goal of ensuring diversity and fairness? This has been a thorny issue for those of us in my profession for a long time. There is no magic bullet. The dissenting opinion in this case led by Justice Ginsburg rallies around the idea that the spirit of diversity and fairness should be the highest standard to which we aspire in hiring. It is hard to argue with this point … except for the fact that there are technical issues which can stand in the way of our achievement of this goal.

So, what does all this mean for hiring in the corporate world? I offer my humble answer to this question as follows:

Don’t Panic –– Police and fire testing is the most highly scrutinized type of testing known to mankind. Don’t panic based on the results of this case. Do use this as a time to think about your use of testing and where it may leave you exposed.

Validate, validate, validate –– In this case the validity of the test was upheld. In my mind the validity of the test, while an issue, was not the main issue at hand. The only reason the city tried to throw out the test was because it ended up being counter to its goal of diversity. Despite this, I cannot stress enough the need to validate all testing that is used to make employment decisions. It is the cornerstone of best practices in testing and provides the documentation you will need should you find yourself in court. Without such documentation, you are toast! As an added bonus, validation is the process that provides awareness of issues such as adverse impact. You may not even know you have a problem unless you take the steps to validate. Remember, ignorance of the law is no excuse!

Look at the bigger picture –– I agree with Justice Ginsburg that the overall goal of eliminating discrimination is the highest standard to which we should be held. In the corporate world this becomes an issue of fairness in hiring practices across the board. One of the biggest ways to guard against problems while working to achieve diversity is to look at the demographics of your workforce vs. those of the available workforce in the area. If these do not look about the same, you have a problem. This problem can be rectified by actively recruiting for diversity. Diversity training programs are OK, and of course I support them, but the best thing to do is to put your money where your mouth is: be aware of your demographics and seek to hire for diversity at all times.

Seek out testing that has been shown to reduce adverse impact –– The Uniform Guidelines on Employee Selection Procedures pretty much lay down the law when it comes to testing. A key part of this doctrine is that one should always seek out tests that are known to have less adverse impact. We know that cognitive tests have the most adverse impact while also providing the best predictive accuracy (i.e., validity). Resolving this conundrum remains the crux of the issue in the Ricci case, as firefighter tests are highly cognitively loaded. In the real world I feel this issue is best addressed via awareness of what is required for the job and by seeking out selection procedures that we know can test cognitive traits while displaying lower levels of adverse impact. If you guessed that I was going to recommend simulations as the best way to accomplish this goal, you are correct! The issues of this case are yet another piece of evidence that clearly demonstrates the value of simulations over more traditional types of testing.
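The adverse-impact standard most often applied under the Uniform Guidelines is the "four-fifths" (80%) rule: a selection rate for any group that is less than 80% of the rate for the group with the highest rate is generally treated as evidence of adverse impact. Here is a minimal sketch of that calculation; all applicant and pass counts are hypothetical.

```python
# Sketch of the Uniform Guidelines' "four-fifths" (80%) rule for
# flagging adverse impact. All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants who were selected (passed)."""
    return selected / applicants

# Hypothetical pass counts: 30 of 50 candidates in the highest-scoring
# group pass, while 10 of 25 candidates in the focal group pass.
reference_rate = selection_rate(30, 50)   # 0.60
focal_rate = selection_rate(10, 25)       # 0.40

ratio = focal_rate / reference_rate       # 0.40 / 0.60, about 0.67
flagged = ratio < 0.80                    # below four-fifths of the top rate

print(f"impact ratio = {ratio:.2f}, adverse impact flagged: {flagged}")
```

Note that the four-fifths rule is only a rule of thumb for triggering scrutiny; it says nothing about whether the test itself is valid or job-related.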

I look forward to the discussion that my opinions generate, and I am glad to see my corner of the hiring world getting its brief exposure in the national media spotlight. I certainly hope that the awareness generated will serve as a catalyst for change.

Dr. Charles Handler is a thought leader, analyst, and practitioner in the talent assessment and human capital space. Throughout his career Dr. Handler has specialized in developing effective, legally defensible employee selection systems. 

Since 2001 Dr. Handler has served as the president and founder of Rocket-Hire, a vendor neutral consultancy dedicated to creating and driving innovation in talent assessment.  Dr. Handler has helped companies such as Intuit, Wells Fargo, KPMG, Scotia Bank, Hilton Worldwide, and Humana to design, implement, and measure impactful employee selection processes.

Through his prolific writing for media outlets such as ERE.net, his work as a pre-hire assessment analyst for Bersin by Deloitte, and worldwide public speaking, Dr. Handler is a highly visible futurist and evangelist for the talent assessment space. Throughout his career, Dr. Handler has been on the forefront of innovation in the talent assessment space, applying his sound foundation in psychometrics to helping drive innovation in assessments through the use of gaming, social media, big data, and other advanced technologies.

Dr. Handler holds a M.S. and Ph.D. in Industrial/Organizational Psychology from Louisiana State University.

LinkedIn: https://www.linkedin.com/in/drcharleshandler

10 Comments on “Thoughts on the Ricci Decision”

  1. My excitement at seeing our field in the spotlight has been tempered by the poor quality of journalism that has reported on the decision. I’ve seen the decision described as putting a nail in the coffin of civil service testing, a mandate against written multiple-choice tests, and as a new legal theory against discrimination.

    This decision changes nothing. Since the Griggs decision, adverse impact against a protected group (not necessarily a minority group) is illegal unless the employer can demonstrate the test was job-related and consistent with business necessity. This decision doesn’t change that. It also doesn’t imply anything about using–or not using–written tests. It doesn’t change a legal burden. The only thing it does, as you point out, is reiterate the importance of validation. And perhaps underscore how complex decisions regarding selection can be.

    Let’s all remember that a written multiple-choice test is not the only kind of test. Anything you do to narrow down your candidate pool is a test. “To test or not to test”, that is not the question. “How to test”–ah, therein lies the rub.

  2. Bryan – Well said… and I’d like to beat this dead horse one more time. Regarding the Ricci decision you state, “The only thing it does, as [Dr. Handler] point[s] out, is reiterate the importance of validation. And perhaps underscore how complex decisions regarding selection can be.”

    I don’t think it’s the “only thing” it does. What it also does is turn our attention away from the crux of the issue. The validation of the tests used in this case, developed at a cost of tens of thousands of dollars, wasn’t questioned. What was clearly questioned is the use of a standard “80%” mathematical formula that the federal government feels is the magic number that ultimately waves a bright red flag screaming “Sue me!”. This is THE reason that the city refused to acknowledge the results and this is THE reason the situation cannot be resolved. Reliable, valid, research-based testing versus the 80% math formula.

    As my Statistics Professor told us in grad school, “Go figure.”

  3. Excellent article, but help me understand the practical impact of the test.

    1. Was the exam 100% of the decision to promote?
    2. If not, what other factors were considered?
    3. Since this was a “cognitive” exam; what critical elements were being measured? Certainly not whether people can read.

    My ultimate question is, “why such a high non-pass rate of non-white candidates?” What exactly were the questions that were most difficult to answer? Since the test was valid, the other questions need to be addressed.

  4. Dave – I didn’t get the feeling that the 80% rule was questioned by the courts; in fact they refer to it several times as being the accepted standard. Maybe I’m misunderstanding and all you meant was that this was a major issue, which I would agree with insofar as it was important to the City. But I don’t think this case changes anything about burden shifting as far as adverse impact cases go.

    CB – I don’t believe the exam scores guaranteed a promotion, if that’s what you’re asking. I don’t believe the decision goes into detail about other factors, but perhaps one of the earlier decisions or filings does? Check out http://tinyurl.com/rdscotus. With respect to your last question/issue, “job related” knowledge was measured using source texts presumably dealing with firefighting. Minority candidates almost always score lower on cognitive-heavy exams and there are all kinds of theories about why. Check this out as an example: http://appliedpersonnelresearch.com/papers/adimpact.pdf

  5. CB

    Excellent questions. I am not sure, but I do believe the exam was the only thing related to promotion; if not, it was a big part of it, since one could not be promoted without achieving at least a threshold score that allowed them to pass.

    The exam itself, which I did not see, has its main cognitive basis in reading comprehension and job knowledge. The reason for the differences in such tests between races is the subject of much debate. Some have even taken a stand that it is due to immutable factors in the cognitive structures of different races. Please understand that I do not take this view. We don’t know the reason for sure, but it seems to boil down to the fact that the less reading and the more real-life interaction, the lower the adverse impact.

    I don’t think anyone can answer for sure the reason for the low pass rate for minorities. I haven’t seen the test itself so I can’t comment on the technical aspects of it. At the City of New Orleans, while under an EEOC consent decree, we used a strategy called banding in which the standard error of measurement of the test was used to create groups of scores so as to force a normal distribution (i.e., bell curve) of scores. This technique assumed that all persons within a score “band” could be considered as having equal scores. This worked great for the tail ends of the bell curve, but the middle of the curve had a huge number of folks, and the city could then use its own judgment and fairness requirements to choose one person over the other. Race was the most common thing used to make these decisions. According to the EEOC and the government, this was OK. In reality it inevitably pissed off whomever was passed over. The complexities of this stuff tend to always leave someone feeling like they got the short end of the stick, just as with Affirmative Action programs that denied members of one race who were more qualified access to important things such as university admissions.

    We live in an imperfect world and all we can do is continue to acknowledge the issue and try to find ways to manage it and eliminate it.

  6. The critics of this test, such as Supreme Court Justice Ginsburg, have failed to identify a single SPECIFIC question on this test that was biased in any way. Yet, Ginsburg said the test was “flawed.” So which question(s) were “flawed?” How were they “flawed?” Why were the alleged “flaws” only affecting Blacks and no one else (such as Hispanics and People with Disabilities)? And most importantly, where is the scientifically valid evidence to prove these “flaws” exist in this test?

    The critics of the test have been unable to answer these questions. This leads to the logical conclusion that the critics of the test, including the Judges who voted against the majority SCOTUS opinion, are only interested in a political agenda, and not what is fair and certainly not what the US Constitution demands (protection of EVERYONE’S Civil Rights, even a white guy with a disability).
    Did anyone find out how many hours were spent studying by the Blacks who took the test and compared that to the people who passed it? Maybe the answer to what happened is there,
    OR it may be here:
    http://bit.ly/ZqD2c
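The SEM-based banding strategy described in comment 5 can be sketched in code. This is a hypothetical illustration, not the procedure New Orleans actually used: the 1.96 × SEM × √2 band width shown here is one common formulation for treating two observed scores as statistically indistinguishable, and all score, reliability, and SD figures are invented.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

def band_width(sd: float, reliability: float, z: float = 1.96) -> float:
    """Width within which two observed scores are treated as equivalent
    (95% confidence for the difference between two scores)."""
    return z * sem(sd, reliability) * math.sqrt(2.0)

def make_bands(scores: list, width: float) -> list:
    """Top-down banding: group each score with every score falling
    within `width` points of the current band's top score."""
    bands = []
    remaining = sorted(scores, reverse=True)
    while remaining:
        top = remaining[0]
        band = [s for s in remaining if top - s <= width]
        bands.append(band)
        remaining = remaining[len(band):]  # band is a prefix of the sorted list
    return bands

# Hypothetical example: test SD of 10 points, reliability of .90
width = band_width(sd=10.0, reliability=0.90)   # roughly 8.8 points
print(make_bands([95, 94, 90, 88, 85, 70], width))
```

Everyone inside a band is treated as tied, which is exactly why, as the comment notes, secondary criteria (including race, under the consent decree) ended up deciding among the many candidates who landed in the wide middle bands.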
