Dealing with Assessment Disconnects

Those of you who have followed my articles over the years know that I have devoted a significant amount of time to discussing the benefits of pre-employment assessment tools. I generally take an extremely positive stance on the value of a well-planned and implemented assessment strategy. While I firmly believe in the tremendous added value that is provided by assessment tools, believe me when I say that using them is not always a bed of roses. I have run into a variety of situations in which assessment tools seem to be doing more harm than good for the organizations that are using them. The purpose of this article is to discuss some of the recurring problems I have experienced over the years, while providing some encouragement that these issues are all natural growing pains and should not cause anyone to yell “abandon ship.” I also hope to provide a perspective on root causes for such problems, as well as some pointers for how to proactively avoid them.

Assessing Temporary Workers

Problem: “We use an assessment for one of our positions that often requires temporary workers to be hired. In many cases, we attempt to hire some of the better temps who have been doing the job well, but these individuals fail the assessment. What gives? If they have shown that they can do the job, why can’t they pass the assessment? And, can we still hire them anyway?”

Response: This is a very tough situation to simply explain away. After all, isn’t doing the job well the ultimate goal and the reason we use assessment in the first place? There are several possible explanations. First, it is entirely possible that the assessment being used is simply not predicting the key competencies and abilities actually required to perform the job. In many cases, assessments are implemented without first developing a thorough understanding of what skills the job requires. If one does not ensure that the assessment is job-related, its predictions simply won’t be accurate. A second explanation is that, while a thorough understanding of job performance criteria was used in setting up the assessment, those criteria have changed since the system was implemented. In either case, the root cause is the same: the assessment is not measuring the skills needed to perform the job. In that situation, it’s not surprising at all that people who can do the job aren’t passing the assessment.

The solution: In this case, the assessment isn’t really doing anything in terms of predicting job performance. Take the time to develop a thorough understanding of what skills are necessary to perform the job now, as well as what changes seem likely in the near future. Once this is complete, examine how closely the various aspects of your selection system align with the critical job constructs you identified. This exercise should reveal any gaps between what the assessment is measuring and what it takes to do the job. Correcting the situation requires changing the assessments you are using to ensure they measure the key constructs required for the job.

So, what about the question of whether you can actually hire those people who are doing the job well but failed the assessment? This is a tougher question to answer. In the most technical sense, it’s important to ensure consistency throughout the hiring process, meaning the same rules need to apply to everyone hired for the job. On the other hand, job performance is the ultimate litmus test when hiring. If a person has clearly shown that he can do the job, it seems hard to deny him the opportunity. My advice here would be to clearly document the fact that the temp has been performing the job well, and to keep him on only if you are taking action to rectify the problem with the assessment. That way, you have covered yourself by collecting evidence that he is performing well while proactively fixing the problem that led to the situation in the first place.

Assessing Technical Abilities

Problem: “We use both an assessment tool and a technical interview for one of our positions which requires some specific technical knowledge. We use assessments to test for fit, not for technical knowledge. We rely on the interview to help us evaluate candidates’ technical abilities. We are finding that applicants who have strong technical knowledge and are doing well in the interview are not passing the assessment. This creates a problem for us because we really need to have workers who have these specific technical skills and applicants with these skills are really hard to come by. What can we do to overcome this problem? We’re thinking about just scrapping the assessment part of the hiring because it is really making things harder for us.”

Response: This situation is relatively common, especially with high-tech jobs. It is natural to think about removing the obstacle that is preventing you from hiring the people you need, but this is not always the best solution. We’ve all been in situations where we’ve had the skills it takes to do the job but been unhappy due to a poor fit with other aspects of the position, such as the culture in which we are asked to work. Thus, ignoring fit may not represent a good long-term solution. The issue may actually lie in your recruiting process and the sourcing that feeds it. If you are not taking the steps required to help candidates understand your corporate values, and if you aren’t actively looking for candidates in places where the value match is likely to be strong, those folks are unlikely to end up in your applicant pool in the first place. This is always a danger when focusing solely on the technical aspects of a job.

The solution: There are several things you may want to look at in this situation. First, verify that the assessment you are using really is measuring what it takes to do the job; this is the same situation as in the first problem discussed above. If you identify a disconnect, it may be wise to rethink the values assessment you are using. Perhaps you can identify an assessment that does a better job of measuring the values that are really important to you. If the values you are using as hiring criteria are indeed valid ones, I would then look at your sourcing strategy to evaluate its ability to deliver candidates who have these values. If people with these values aren’t applying, you won’t be able to hire them. It’s that simple. While this goes beyond the actual use of assessment, it should provide a very good example of how interrelated all aspects of the hiring process are. An assessment alone won’t do the trick; its ability to add value is directly related to the candidates that feed into it.

Assessing Legitimacy

Problem: “We really want to use assessment tools, but we are worried that allowing applicants to complete the tests in their homes may cause problems. How can we be sure applicants aren’t cheating and that the actual applicant is really the one taking the assessment?”

Response: This legitimate concern is one of the most common arguments against testing that I hear. It is unsettling to think that there are applicants out there who are either having others take assessments for them or who are attempting to cheat on assessments they are taking. There has been a good amount of research on this issue, and most of it has found that cheating does not negate the positive impact of using assessments. That being said, no one wants to be in the position of hiring someone who appears able to perform the job and then fails miserably. The good news is that there are a variety of things that can be done to reduce the likelihood that a bad hiring decision will be made due to hocus-pocus during the assessment.


The solution: Several things can be done that will allow an organization to use assessments to make good hiring decisions without letting a few bad apples spoil the lot. The first is simply to avoid remote, unproctored testing altogether. While this is an option, it does not allow you to take advantage of the Internet’s reach to a wide range of candidates, and it is hard to execute when using assessment as a screen for high-volume positions. So, if you really want to use unproctored testing as a screening tool, here are some things you can do to stack the deck in your favor when it comes to eliminating cheating. The easiest place to start is the messaging that surrounds the assessment. I believe in letting the candidate know that cheating on the exam won’t help her at all. The most effective way to do this is to help her understand that a bad job fit does not benefit her; pretending to be someone she is not will lead to problems down the road and probably won’t benefit her career one bit. You may also want to inform applicants that you will be verifying the information collected later in the process, so that cheating is likely to be detected. After working on the messaging, the second layer of defense is to create some opportunity to verify information collected remotely during the process.

For instance, it is almost always the case that candidates must do some work in-person before being hired. One way to obtain verification is to add a second, deeper set of assessments during this on-site part of the hiring process. This will allow hiring personnel to examine both sets of results for inconsistencies. This fits with the idea that hiring decisions should be made based on the results of multiple pieces of data. Those making decisions shouldn’t place too much weight on any one piece of data (e.g., remote test scores) but rather look for trends across all of the information collected. This may require a bit of redundancy, but remember the saying, “an ounce of prevention is worth a pound of cure.” This is especially important to remember if you opt not to use a second set of assessments during an on-site visit.


These real-world issues represent only a small fraction of the potential problems that can make using assessment difficult. Despite such issues, the important thing to remember is that using assessments is not, and never will be, a perfect solution for making good hiring decisions. In reality, it is what it is: another way to get data that helps decision-makers decide more accurately and more consistently. With that in mind, most troubles with assessments can be lumped into three major types of issues:

  • Type 1 is using assessment without first making sure you have a clear understanding of what it takes to do a specific job. In this situation, you’ll end up assessing irrelevant traits, which will render the assessment worthless. The moral here is not to put the cart before the horse by throwing just any assessment out there and assuming it will work for you.
  • Type 2 involves ignoring the fact that you can’t hire those people with the traits you desire if you don’t recruit them and compel them to apply for a position. Don’t blame assessments for keeping you from hiring people with the traits you seek until you take a look at whom you are assessing. An assessment can’t magically give candidates traits they don’t have and you can’t hire for traits that aren’t represented in your applicant pool. So, effective assessment requires a sourcing strategy that will provide the raw material required for success.
  • Type 3 is related to the belief that one assessment will function as some sort of flawless oracle, providing all of the information one needs to make all hiring decisions. In reality, assessments are only one data point that should be kept in perspective relative to other information that one collects from an applicant.

The best decisions anyone can make are informed decisions, and therefore, it is critical to balance the data collected during hiring to help ensure you are getting the real story from applicants, be they cheaters or not.

Dr. Charles Handler is a thought leader, analyst, and practitioner in the talent assessment and human capital space. Throughout his career Dr. Handler has specialized in developing effective, legally defensible employee selection systems. 

Since 2001 Dr. Handler has served as the president and founder of Rocket-Hire, a vendor-neutral consultancy dedicated to creating and driving innovation in talent assessment. Dr. Handler has helped companies such as Intuit, Wells Fargo, KPMG, Scotia Bank, Hilton Worldwide, and Humana to design, implement, and measure impactful employee selection processes.

Through his prolific writing for various media outlets, his work as a pre-hire assessment analyst for Bersin by Deloitte, and his worldwide public speaking, Dr. Handler is a highly visible futurist and evangelist for the talent assessment space. Throughout his career, he has been at the forefront of innovation in talent assessment, applying his sound foundation in psychometrics to drive innovation through the use of gaming, social media, big data, and other advanced technologies.

Dr. Handler holds an M.S. and Ph.D. in Industrial/Organizational Psychology from Louisiana State University.







