2005 Online Screening and Assessment Survey Results, Part 2

article by Mark Healy & Charles Handler

Organizations have been deploying pre-employment tests and assessments on computers for at least two decades. What began as pencil-and-paper tests adapted to a computer interface has matured: modern hiring and employee development tools have surmounted difficulties with data capture and reporting, user access, and database management to bring a new level of technical sophistication to recruitment and hiring programs. Now, six years into the 21st century, human resources and recruiting departments are keenly focused on delivering postings, applications, and assessment tools to job applicants via the Internet. This revolution in the way companies recruit and hire people is moving quickly, so for the third straight year we have asked users of online screening and assessment technology about their current and future use of web-based hiring systems.

This year we were able to collect data from 90 hiring professionals. The data offered us an excellent snapshot of current organizational strategies, concerns, and insights around the use of technology-based screening and assessment tools.

In Part 1 of this article series on our findings, we detailed current usage rates and feelings about screening systems such as online job applications, applicant tracking systems (ATS), qualifications screening, and resume scanning. Moreover, we investigated trends in tracking the effectiveness of these tools, and how organizations could do a better job of understanding the systems they have in place. Here, in Part 2, we dive deeper into automated hiring processes by considering the current and future use of online assessment tools.

Assessment vs. Screening

Although definitions vary, the term “assessment” has been distinguished from “screening” in today’s human resource environment. For the purposes of our survey — and in alignment with this trend — we defined online assessment tools as: “Scientifically-based screening tools that look more deeply into a candidate’s abilities, interests, and skills. These tools include personality measures, cognitive tests (i.e., verbal and quantitative skills), situational judgment tests, job simulations, etc. These tools are typically used for a more in-depth evaluation later on in the staffing process.” In other words, assessment tools evaluate job candidates much more thoroughly than prescreening devices, and are often used once an initial hurdle or multiple hurdles have been successfully passed by applicants. Assessment tools tend to evaluate the level or quality of the skills and abilities that are essential for job performance, rather than the extent to which an applicant’s experience qualifies them for a specific position. As in past years, our survey revealed some very interesting information about the use of assessment tools. The remainder of this article provides a summary of these results.

The Use of Assessment

Of our 90 respondents, 68 companies (76%) utilize assessment tools — either online or in traditional paper-and-pencil format. Of these, 50% deploy one or more tools online, either in a proctored environment or over the Internet for access from any web-enabled computer. Organizations surveyed varied quite a bit by size, but this — and number of hires per year — had no bearing on whether or not a company used assessment tools. This is in contrast to last year’s sample, in which organizations with 5,000 or more employees used assessment at a much greater rate than smaller companies. The ubiquity of applicant tracking systems has engendered a modest increase in the rate of companies that integrate assessments into these common HR platforms. Specifically, 22% of companies using assessment integrate candidate results into their ATS, compared with 13% last year. This generally low rate of integration (under 25%), however, is not really surprising; although use of an ATS has become standard in most large HR operations, the full capabilities of such systems are not typically utilized. The breadth of assessment use throughout an organization varied quite a bit across the study as well.

As can be seen in Table 1, the typical organization deploys assessment tools as part of the hiring strategy for only a subset of jobs. Only 19% utilize online assessment for all jobs worldwide, and an additional 20% use assessment for all domestic jobs. These results are similar to last year’s findings, where 37% of organizations used assessment either world- or country-wide.

Table 1. How is online assessment deployed in the organization?

All jobs within a business unit, but not all business units 4%
All domestic jobs 20%
All worldwide jobs 19%
Specific local jobs only 41%
Not sure 17%

For another angle on the extent of assessment implementation, we asked respondents to indicate the level of jobs for which assessment is used for evaluating candidates.

Table 2. Use of online assessment at different job levels.

Entry level/hourly 46%
Lower level management/professional 49%
Middle level management/professional 43%
Higher level management (director) 37%
Executive/vice president 26%
Other 7%

The results of this year’s survey suggest that the range of job levels covered is one aspect of online assessment that has evolved. The in-depth evaluation and comparison of pre-qualified candidates using online assessment tools is no longer relegated to entry-level positions or technical professionals. Only at the senior leader level does use remain under 30%; this is not unexpected, given the one-on-one, high-touch nature of executive recruiting and promotional systems. Unlike job level, rates of use for different types of job settings appear to vary considerably. Table 3 reveals these distinct trends.

Table 3. Use of online assessment by job setting.

Customer service 41%
Manufacturing/labor 12%
Skilled trades 35%
Account management 34%
Call centers 35%
Managerial/supervisory 46%
Administrative 35%
Information technology 43%
Retail 12%
Consulting/advising 19%
Sales 35%
Professional 46%
Other 26%

These results reflect not only the sample of survey respondents, but a certain level of maturity in the development and implementation of assessment in the early 21st century. For example, use of online assessment tools for managerial, IT, and customer service hiring is common in part because so many different cognitive ability tests, personality inventories, and other in-depth assessments have been developed specifically for these sorts of jobs. Moreover, validated assessments of key skills and competencies required for success in these roles have been available for years in paper-and-pencil format. Not insignificantly, organizations tend to imitate others in their industry, so a bit of a “snowball” effect is reflected here as well. Completing an online assessment has become relatively standard for customer service, managerial, professional, and IT jobs, but manufacturing and retail hiring situations are still commonly handled via face-to-face evaluation of local, “walk-in” applicants. Online assessment of these candidates is still a relative novelty.

Types of Assessment Tools in Use

The rates of usage of various types of tools by organizations employing an assessment strategy provide insight into the growth of this area, especially when compared to our previous surveys. Although this year’s sample of people professionals represents a largely different group of individuals than last year’s, the substantial increase in the adoption of most types of tools is difficult to dismiss. Even more telling, Table 4 displays the wide disparities among the rates of use of different types of assessments.

Table 4. Rates of adoption of common assessment tools.

Type of assessment 2005 2003-04 2002
Personality measures 34% 30% 21%
Assessment of “fit” with company 35% 27% 29%
Cognitive ability 46% 27% 26%
Biodata 15% 10% 14%
Skills/knowledge 53% 40% 12%
Background investigations 54% 30% 31%
Simulations 18% 14% 10%
Online interviews 15% 4% 19%

In line with previous surveys, assessments of background, specific skills and knowledge, and cognitive ability are becoming routine. However, this year’s results signify a substantial jump in the level of adoption of these instruments. Biodata inventories, online interviews, and simulations continue to be less often used. Moreover, the most common types of assessments realized the largest gains in usage — particularly skills/knowledge assessments, background investigations, and cognitive ability tests.

Effectiveness of Assessment Tools

With the wider implementation of these tools, are companies finally realizing the value of structured, validated assessments to their people strategy? Table 5 displays overall responses to, arguably, the most important question surrounding the implementation of any people strategy or tactic. As can be seen below, whether an organization actually attempts to answer this question is critical to the eventual answer.

Table 5. Perceived effectiveness of assessment vs. collection of metrics.

Does assessment have a positive impact on your organization? Yes No Not sure
Overall 70% 7% 22%
Organizations collecting metrics 88% 0% 12%
Organizations that do not collect metrics 64% 16% 20%

As in previous years, and as with just about any people program or strategy, fewer than one-third (specifically 28%) of the companies that use assessment formally collect metrics to determine whether their assessment strategy adds value to their organization. Of those that do, there seems to be strong confirmation of the value of assessment. Those who do not, in some cases, have determined that assessment does not positively impact their company, even though they have not directly measured its effect. These results match those for online screening tools, as detailed in Part 1 of this piece. Since the collection of metrics does seem to contribute to perceptions of value to the organization, we also asked whether the collection and presentation of metrics help to make a business case for the continued use or expansion of assessment tools. According to Table 6, the majority of companies collecting metrics are able to use them to build a business case for the use of assessment.

Table 6. Do metrics help build a business case for assessment?

Yes 71%
No 11%
Not sure 18%

An evaluation of a company’s hiring system — or just parts of it — may involve a variety of different metrics and measures that consider “success” from a variety of angles. We asked survey respondents to indicate the metrics they use to evaluate the impact of their assessment systems. Typical measures include:

  • Comparisons of assessment scores versus tenure and turnover
  • Traditional validation studies, including statistical analysis of the relationship between prescreen and assessment scores and job performance measures
  • Adverse impact and diversity indicators
  • Standard HR metrics, such as time-to-fill position and cost-per-hire
  • Formal and informal impressions and opinions of managers, HR and recruiting staff, and job candidates

These metrics vary in the effort and time span required for data collection. Still, they can be systematized and applied across all of an organization’s hiring programs, and people professionals can readily build expertise in this area as the use of metrics permeates their hiring processes. A rough sketch of two of the simpler calculations follows.
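To make a couple of the measures above concrete, here is a minimal sketch, in Python, of two of the simpler calculations: an adverse impact check based on the common four-fifths rule of thumb, and a basic criterion-related validity coefficient relating assessment scores to later performance ratings. Every figure is a hypothetical placeholder; a real evaluation would use the organization’s own applicant and performance data, along with appropriate statistical tests.

    # Illustrative only: every number below is a hypothetical placeholder.

    # 1) Adverse impact check (four-fifths rule of thumb): compare selection
    #    rates between two groups of applicants.
    def selection_rate(hired, applied):
        return hired / applied

    rate_group_a = selection_rate(hired=60, applied=200)   # 30% selection rate
    rate_group_b = selection_rate(hired=18, applied=100)   # 18% selection rate

    impact_ratio = rate_group_b / rate_group_a              # 0.60 in this example
    print(f"Impact ratio: {impact_ratio:.2f} "
          f"(below 0.80 suggests possible adverse impact: {impact_ratio < 0.80})")

    # 2) Criterion-related validity: correlate assessment scores with a later
    #    job performance measure for the same group of hires.
    def pearson_r(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    assessment_scores = [55, 62, 70, 48, 81, 66, 74, 59]            # hypothetical
    performance_ratings = [3.1, 3.4, 3.9, 2.8, 4.5, 3.6, 4.0, 3.2]  # hypothetical
    print(f"Validity coefficient (r): {pearson_r(assessment_scores, performance_ratings):.2f}")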

The Future of Online Screening and Assessment

Despite the expansion and pervasiveness of online screening and assessment, many organizations have yet to purchase, let alone implement, these tools. However, of those companies not currently using screening or assessment instruments, nearly three-quarters (73%) feel their organization will adopt them in the future. For these potential adopters of online hiring technology, Table 7 summarizes the sorts of tools under consideration.

Table 7. Screening and assessment tools under consideration.

Type of screening/assessment Percent considering use
Resume scanning tools 26%
Qualifications (experience, education, etc.) 46%
Personality measures 23%
Assessment of “fit” with company 34%
Cognitive ability 17%
Biodata 11%
Skills/knowledge assessment 60%
Background investigations 34%
Simulations 17%
Online interviews 23%
Don’t know 6%

Consistent with previous surveys, and with the crowded market of vendors competing for this business, qualifications screening, assessment of cultural fit, skills/knowledge assessments, and background investigations dominate the interest of companies seeking to dip a toe into the ocean of online hiring tools. Companies will consider, compare, and purchase these systems using a variety of techniques. Table 8 details these methods for selecting recruitment and hiring technology.

Table 8. Method of comparing and purchasing assessment tools.

Shopping and purchasing method % using method
Formal RFP process 26%
Informal decision making 18%
Recommendation from consultant 6%
Via partnerships/services from existing vendors 16%
Don’t know 26%
Other 8%

Clearly, few trends emerge, with formal requests for proposal (RFP) the most popular — but by no means majority — decision-making tool. So what is keeping many organizations from adopting online screening and assessment tools or expanding their role in recruiting and hiring?

Obstacles to the Use of Online Screening

Greater usage of online prescreening and assessment would occur were it not for the obstacles and limitations perceived by organizational leadership and people professionals. These anxieties, whether grounded in reality or not, present real barriers to the growth and enhancement of hiring success in every type of organization. Table 9 highlights, in order of endorsement, the concerns held by organizations seeking to adopt hiring technology.

Table 9. Percentage of respondents noting concerns and obstacles to the adoption of online hiring tools.

Too costly; lack of budget resources 27%
Lack of knowledge in organization 26%
Skepticism about the ability of screening to provide results 22%
Decision makers do not believe it is worth the cost 17%
Hesitation due to legal issues 12%
Hesitation due to security issues 11%
Tools will negatively impact the candidate experience 11%
No obstacles 9%
Technology is still too new 9%
HR not interested in innovation; reluctance to change 8%

Another way to consider these feelings is to look at the “single biggest obstacle” perceived by survey respondents. For the most part, the data in Table 10 reflect the results in the previous table.

Table 10. The single biggest obstacle to the adoption of prescreening and assessment?

Too costly; lack of budget resources 19%
Skepticism about the ability of screening to provide results 16%
No obstacles 14%
Tools will negatively impact the candidate experience 11%
Technology is still too new 11%
Decision makers do not believe it is worth the cost 7%
Lack of knowledge in organization 7%
HR not interested in innovation; reluctance to change 7%
Hesitation due to security issues 5%
Hesitation due to legal issues 4%

In general, these concerns center around:

  1. The value of spending money and time on more advanced assessment
  2. Candidate reactions to hiring tools
  3. Lack of knowledge regarding the positive implications of assessment tools

With 14% indicating no obstacles, are these legitimate concerns or merely the anxieties of inexperienced, paranoid, or jaded decision makers? To answer that question and others, we will, in Part 3 of this article series, delve into a deeper analysis of the overall findings and general implications of these results.

Dr. Charles Handler is a thought leader, analyst, and practitioner in the talent assessment and human capital space. Throughout his career Dr. Handler has specialized in developing effective, legally defensible employee selection systems. 

Since 2001 Dr. Handler has served as the president and founder of Rocket-Hire, a vendor neutral consultancy dedicated to creating and driving innovation in talent assessment.  Dr. Handler has helped companies such as Intuit, Wells Fargo, KPMG, Scotia Bank, Hilton Worldwide, and Humana to design, implement, and measure impactful employee selection processes.

Through his prolific writing for media outlets such as ERE.net, his work as a pre-hire assessment analyst for Bersin by Deloitte, and worldwide public speaking, Dr. Handler is a highly visible futurist and evangelist for the talent assessment space. Throughout his career, Dr. Handler has been on the forefront of innovation in the talent assessment space, applying his sound foundation in psychometrics to helping drive innovation in assessments through the use of gaming, social media, big data, and other advanced technologies.

Dr. Handler holds an M.S. and a Ph.D. in Industrial/Organizational Psychology from Louisiana State University.

LinkedIn: https://www.linkedin.com/in/drcharleshandler


16 Comments on “2005 Online Screening and Assessment Survey Results, Part 2”

  1. The idea that assessments cost too much is quite interesting. What do assessments cost and how much do they save? From my experience I know that an effective assessment program can cut turnover rates in half within a short time. What is it worth to you to cut your turnover rate in half? The ROI of an effective assessment program is almost always well in excess of the minimum required by the CFO. We need to know what turnover costs, what assessments cost and how much turnover can be cut.
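As an illustration of the back-of-the-envelope arithmetic described in the comment above, here is a minimal sketch in Python. Every figure is a hypothetical placeholder; the point is simply that turnover savings and assessment program costs can be compared directly once an organization knows its own numbers.

    # Illustrative only: all numbers are hypothetical placeholders.
    hires_per_year = 500
    baseline_turnover_rate = 0.40        # share of new hires who leave within a year
    cost_per_departure = 15_000          # recruiting, training, lost productivity, etc.
    assumed_turnover_reduction = 0.50    # the "cut turnover in half" scenario

    candidates_assessed = 4_000
    assessment_cost_per_candidate = 25

    annual_turnover_cost = hires_per_year * baseline_turnover_rate * cost_per_departure
    annual_savings = annual_turnover_cost * assumed_turnover_reduction
    program_cost = candidates_assessed * assessment_cost_per_candidate

    roi = (annual_savings - program_cost) / program_cost
    print(f"Savings: ${annual_savings:,.0f}  Cost: ${program_cost:,.0f}  ROI: {roi:.0%}")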

  2. I found one specific startling statistic which
    glaringly stands out in this article:

    ‘ … less than one-third (specifically 28%) of the companies who use assessment formally collected metrics to determine if their assessment strategy adds value to their organization … ‘

    So in other words out of the 100% of companies that used testing … 72% are c-l-u-e-l-e-s-s as to whether the test is accomplishing anything!

    This would sound about right … I’d say it’s even higher from personal observations … like 90%.

    This is a rather astounding testament to the successful sales capabilities of ‘Test Vendor Sales Reps’ who succeed in convincing top-level corporate decision makers that their dumb-ass tests actually accomplish anything.

    I’ve heard enough stories from the ‘trenches’ regarding these tests that I could write a book someday, but I’d have to be retired first, since disclosing everyone’s names would preclude me from working anymore.

    After a few years of being unable to serve up one candidate that could pass the Caliper test at one insurance client … a mid level manager admitted to me privately:

    ‘We all had to take the Caliper test one day … many department heads were worried they would not qualify for the very job they’ve held for years … and since many felt they’d flunk it we obtained the answer code first before taking the test [easy to do for management]…’

    Thereby all the Team Leaders and department heads scored high … and candidates were now being judged by the artificially high ‘internal metrics’ created by the cheating managers themselves!!!

    And they wonder why NO CANDIDATE can score adequately enough (Top management does not know what is going on)!!

    Another national, publicly traded client’s internal corporate and regional HR folks have revealed ‘We HATE that Caliper test (sorry to pick on you Caliper … but you’ve made my life miserable lately) but TOP MANAGEMENT bought into it and we can’t seem to change their mind’.

    There should be good reason to hate it. What used to be one hire for every 3 candidates submitted, on average, has turned into one candidate able to pass the Caliper test for every 8 submitted (and that’s just to PASS THE TEST).

    After 15 years of testing I’ve arrived at my own metric: Tossing a 25 cent quarter and choosing heads or tails accomplishes the same result.

    I’ve had candidates (for recruiters) score 85% and fail miserably in real life. I’ve had those that flunked with a 56% grade go on to become long term superstar producers for 9 years and more.

    I’ve also concluded the following:

    Testing individuals is a flawed process itself.

    Why? Because if I hire all ‘aggressive, assertive, competitive hunters’ … I will wind up with an office of backstabbing hyper competitive reps that can’t get along with one another.

    But my department is actually STRONGER and BETTER when we bring in a few ‘hunters’, a few ‘gatherers’ and others that support the two.

    I now use tests only for humorous anecdote just for comparison … in the end … It is I that will decide who I hire. Which leads me to another observation:

    How many VP’s I know have their ‘hands tied’ and controlled by outside testing services?? I’d hate to be a VP responsible for hiring, and have to succumb to hiring only those that pass a certain test! Why bother having a VP Title only to have your authority diminished?

    Pretty soon we won’t have to concern ourselves with ‘Tests’ … they just placed human brain cells in a mouse. And it won’t be long before they extract the top Sales Producers ‘brain cells’ and simply create a clone farm like any production operation.

    That way companies can ‘grow’ their human resource talent just like we now raise farm bred Salmon.

  3. In response to Frank Risalvato’s comments, my humble opinion is that assessments are a piece of the overall hiring process. Human resources professionals who utilize an assessment as ‘the tool’ to winnow the candidate pool are not using it for its intended purpose, which is to assess the skill level of the interviewee.

    The interview process is a way for all of us (recruiters, hiring managers, peer interviewers) to get a true understanding of the person we hope to hire. Thing is, there is usually very little time allocated for interviewing. Imagine trying to make an important hire based upon a one hour interview?

    There are behavioral interviewing techniques (which I use) and assessments to ‘help’ in the decision-making process. Ultimately, the final arbiter is a human being.

    The use of assessments should never take the place of human judgment. It may enhance the process, but it should not become the process.

  4. Frank,
    excellent points for sure. I recently informed a candidate that he was to take a psychometric test, and he started asking some very interesting questions –
    1 which test is it? (not bad)
    2 What type is it? projective, objective, idiographic and such like..
    3 What are they trying to determine?
    4 What is the main Goal
    5 What is the company Culture and so on.

    After about 20 minutes, I dug deeper and wondered why he asked so many deep questions about the company’s profile and managers (kinda duh, huh?)

    Anyways he made a comment that was very interesting, he said ‘just tell me what personality you want me to be and I will be it’

    He told me a story of how his boss had told him that he needed to take a test (a new policy with the company); he said the same to his boss, and his boss laughed just like I did… He said it again.

    Well, this candidate later proved to me and his boss that through some sales training and some classes he took (some were research at the local university) he had learned how to master psychometric tests. (He had actual physical results that he e-mailed to me.)

    Note that the words used are ‘master’ or ‘excel,’ not ‘fake,’ when looking for this public sales training. Knowing in advance what the company is looking for, what the culture is like, and so on allows individuals to prepare for the tests in advance.

    What is interesting is that they are not faking, really; before they go into the tests they are able to mentally gear themselves and identify those strengths within them by remembering certain situational incidents.

    An interesting comment I saw recently: ‘it is quite easy to tailor your answers so that you can appear however you want to appear. In fact, some people are even savvy enough to try to mimic certain ‘types’ on the Myers-Briggs. Additionally, by providing an ‘objective’ and nonevaluative reference for personality and style, some of these tests provide good rationalizations and excuses for one’s shortcomings when circumstances cannot be blamed. For example, one can blame a messy desk or missed deadlines on the fact that one is a ‘P’ — or a ‘perceiver’ in Myers-Briggs terminology’

    If I hadn’t seen the proof I would not have believed it for sure.

  5. Hi: Just a few observations on the commentary regarding the uselessness of testing. First, the post assumes that most organizations blindly adopt testing and never evaluate its impact. In reality, most organizations require sign-off on testing programs from their operations staff, HR professionals, employment attorneys and technical specialists, which often include industrial psychologists. These staff members focus on the utility, validity and legal defensibility of instruments before they are implemented. Contrary to this post’s implications, corporations tend to be extremely diligent in this review and these professionals actually understand testing and its scientific foundations. Bottom line, the vast majority of companies adopt testing because of its technical and legal documentation, not because they like a sales representative.

    Certainly some organizations may simply rely on a test publisher’s validity evidence and not have metrics of their own, but this practice is certainly not limited to testing. These same organizations probably rely on their interviews, drug testing, criminal background checks, etc. without using any internal metrics, while having no documented external evidence (validity studies) that these practices are predictive of important work behavior.

    From a scientific perspective, I suggest reading an article written by two of the country’s most renowned industrial psychologists entitled ‘The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.’ This 1998 article published in the American Psychological Association’s Psychological Bulletin concludes that research shows that a combination of certain assessments is often the most valid and practical means of selecting employees. Parenthetically, it is important to note that no test, interview, job simulation, drug test or background check is going to be a perfect predictor of employee performance. As a result, there will always be stories from the trenches that an applicant’s performance on the job was exactly the opposite of what the test, interview or other form of assessment predicted or that an applicant faked out the interviewer, test or other process because it was apparent what the job required. Reciprocally, there will also be stories from the trenches that an applicant’s job performance was perfectly predicted by the test, interview or assessment. Obviously, an organization’s reliance on testing or any selection procedure is much more appropriately justified by professionally conducted and legally probative validation studies, rather than a few isolated case studies derived from the trenches.

    I trust this information is helpful in moving this discussion to being a bit more fact-focused.

  6. Karen – My point precisely.

    You’re damned if you ‘do’ (be honest, in which case most fail, as human personalities, like thumbprints, are diverse).

    Or damned if you do not (be honest … in which case you can lie through your teeth and pass with flying colors).

    I can take the Caliper test right this minute … and blow the doors off the charts … so what?

    That would just mean I know what they want to hear, that’s all.

    Does that make me a good Regional Sales Manager?

    No.

    I know someone that just passed such a test (Caliper) and failed the first interview M-i-s-e-r-a-b-l-y !! I knew he was going to fail the interview … but let him go on as we had precious few candidates to thread the Caliper test needle with.

    What does that tell me? The test is useless!

    If you can score high and flunk the interview … then it means that many scoring low who might be GOOD CANDIDATES are being inadvertently passed up!

    As Doc Williams put it to me a few weeks ago while talking by phone, ‘if the test were valid every single hire scoring high on the test would be a TOP PRODUCER’ … this is not the case.

    Arguing about tests is futile. It’s the equivalent of arguing which Religion worships the true god … or which Political Party is better, Democrats or Republicans.

    Those of us with hands on observations and convictions stemming from 20 years of hiring experience (like me) see them as no better than a coin toss.

    Those who are in the business … such as David Arnold, who works for the Association of Test Publishers … will obviously tout their usefulness, as this is his livelihood.

    I went by the facts, and the facts indicate they are useless unless properly used as part of the process; unfortunately, based on 20 years of factual hands-on observation and participation, most do not use them properly.

    With all due respect Mr. Arnold … how many individuals do YOU HIRE each year using tests??

  7. Hi:

    Following are just a few final comments I have regarding this discussion. First, contrary to the most recent post, it is ludicrous to state that ‘if the test were valid every single hire scoring high on the test would be a top producer.’ In reality, there is no silver bullet that is going to be a perfect predictor of job performance. We still continue to use interviews (appropriately so) even though not every applicant evaluated favorably by an interviewer performs well on the job. Using valid tests and/or other validated tools will not lead to perfection, but such an approach will minimize the likelihood of hiring errors.

    Second, from a pragmatic perspective, employers use multiple hiring procedures because they provide supplemental information about an applicant’s qualifications–the ultimate goal of tests and other assessments is not necessarily to confirm the results of an interview.

    Finally, I actually have been in the trenches and been involved in hiring thousands of employees, including salespeople, pilots, customer service agents, store managers, and C-level employees. Just like the extensive number of other HR professionals who utilize assessments in the private and public sectors, I recognize the strengths and limitations of hiring tools, including tests, not only based on my work in the trenches, but due to reliance on scientific findings.

    I trust these thoughts are helpful.

  8. Perhaps you could answer a question which has bothered me for years:

    What is the correct answer to the following ‘personality test’ question?

    Green is to beach as
    Purple is to:

    A) chair
    B) sky
    C) motion
    D) rage

    Please show your work!!!

  9. Here is an interesting part of an article from Team Technology that I cut and pasted – http://www.teamtechnology.co.uk/personality-tests.html

    the expression ‘personality quiz’ or ‘test’ can be somewhat misleading, because usually:

    – a quiz or test has right or wrong answers against which you are marked
    – the results reliably give you a result
    – that score is objective and definitive
    – it tells you something about your ability
    – it can be used to predict how you might do something in the future

    However, none of the above apply to most personality tests or quizzes, especially ones relating to ‘personality type’, because usually:

    – they only tell you how different people like to approach things differently

    -they do not predict behaviour, because behaviour is often dependent on the circumstance or situation – eg: when you are driving a car, whether you change gear with your right or left hand depends on the design/layout of the car, not your handedness preference
    – they do not tell you about your ability (eg: you might prefer extraversion, but could be bad at dealing with people, or you might prefer introversion and be very good at dealing with people)
    – the scores are subjective, and can change depending on the mood/attitude/mind-set you have when completing it (sic wonder about the mood of the computer or tester as well :))
    – the results are not infallible, or even highly reliable. Eg: research shows even the best personality type questionnaires produce an incorrect result in, on average, 1 in 4 cases.

    This has important implications for you, if you are thinking of completing a personality test or quiz: you should be prepared to change the results if you think they are wrong, and not base any important judgements solely on the personality test’

    By the way there is an excellent Research Piece by the University of Delaware Education and Research Dev. Center – Testing, not an exact science –
    http://www.rdc.udel.edu/policy_briefs/v16_May.pdf

    Here are some free psych tests from psych web see how easy it is to ‘fix the tests’

    http://www.psywww.com/resource/bytopic/testing.html

  10. With all due respect, David, you seem to be a prudent and wise interviewer who knows how to incorporate test results into the hiring process. That’s a good thing (as Martha Stewart would say).

    Here’s the glitch: Not everyone is as wise as you or I in knowing how to incorporate tests into the decision making process.

    I cited one specific example several posts ago (about the company whose mid-level management obtained the answer template before taking the test, establishing an artificially high benchmark).

    I will now cite yet another specific example (without mentioning names, of course):

    There’s one company out there that for more than 15 consecutive years has used a certain test score to predetermine whether or not to even MEET a prospective candidate for the initial face-to-face interview.

    In other words, despite a candidate having been prescreened by a competent executive recruiter who was paid a retainer or engagement fee (incurring an expense in itself for the client), and despite the added corporate Human Resources telephone interview, which usually concurs with the recruiter’s assessment … if the candidate does not then score sufficiently in 14 different categories on a certain online ‘profile test,’ he or she is then discriminated against (strong word but true) and precluded from ever having a first face-to-face interview with the actual hiring manager.

    Is that right?

    No. Ask anyone including the testing companies themselves and they will tell you this is an incorrect procedure as it gives too much weight to the test prematurely in the process.

    The result is ‘premature eject-ulation’ of candidates that often took an entire team of recruiters (myself included), managers, and researchers what could have been 5 weeks to find.

    Yet for 15 long, stubborn years, they reject candidate after candidate … many of whom hold esteemed, highly productive jobs with direct competitors while being rejected by the online ‘test’.

    I don’t mind you wanting to test someone AFTER you have met the person and taken your internal Human Resources and your external executive recruiting firm’s recommendation to meet in the first place … I do have a serious issue with candidates being ejected on the basis of a test score alone.

    As I write this post, I conservatively estimate that the companies I have assisted in staffing generate well over one billion dollars in added annual revenue from the teams we’ve helped build and the departments we’ve staffed (along with others here at IRES, Inc.).

    The companies know our track record as well.
    So why would they forego the recommendation of a search firm with hundreds of millions of dollars of company-building experience for an automated test?

    Delicate probing as to who made this ‘test usage decision’ led me to discover that this test is touted by very ‘high-ups’ in the corporation’s home office … even though it is despised by many of their own regional HR folks, whose lives are made as difficult as those of the outside recruiting firms hired.

    The company believes it has a ‘Recruiting Problem’.

    I assure you it has no such problem, as they perform better than most in attracting a pipeline of qualified talent for each job and are otherwise a highly respected organization (I tell them this frequently).

    The one problem they do have is an ‘Incorrect Test Usage Procedure’ problem. And because they reject 2-3 times more candidates before they are ever invited for the initial interview, they require 2-3 times more ‘recruiting power’ to make up for the mess the test is creating.

    Fortunately for us … we instituted an ‘Engagement Fee’ or ‘Retainer’ approach many years ago for searches where we learn such tests are to be part of the process. This assures us that if we do our job and submit a slate of 3 or 4 qualified candidates who get ejected by the test … we’ve at least received some of our compensation.

    For years we have refrained from working on searches on a pure contingency basis if it is disclosed a test will become an early indicator of a candidate’s interview progression.

    You want to pass on the candidate after you’ve met her … fine … but for God’s sake, man, MEET THE CANDIDATE all your managers and experts are telling you that you should meet!!

  11. Of course you’d say that – you’re the legal counsel to the Association of Test Publishers. You have a vested interest because seeing these tests implemented equals money in your pocket.

    In the real world, I have used tests at 4 different companies and have tested over 10,000 candidates. I can tell you without a doubt that not only did the tests screen out some of the best qualified candidates, but at every company we found that those who passed the test were typically better educated. Unfortunately, too often they would bomb out on the job as terrible performers.

    Thankfully, I now work at a company that, after evaluating dozens of tests, has decided that these tests are flawed and are in themselves a form of discrimination. They’ve even thrown out degree requirements and will accept years of experience instead.

    For the first time in years, I can hire based on real world skills and performance rather than whether or not someone has the right pedigree or whether or not they are clever enough to pass these tests. And guess what, of the 120+ people I hired in the last 12 months, we only lost 2 – yes TWO. The rest are still on the job and performing at or above the required levels. The loss level at companies using tests for qualifiers was always over 20%.

    In my experience, companies that use these tests are trying to cover for weak hiring practices and poor interview techniques. If you really want to test someone, hand them off to whoever they will be working for and let them spend an hour or two with their prospective teams in a real-world work environment. I guarantee within minutes you will have an accurate assessment of that person’s ability to perform on the job and how they interact with others.

  12. Re: ‘If every test was valid, high scores would consistently predict high performance’.

    ‘Valid’ in this context was intended to mean ‘supported by objective truth.’ If a hiring test were supported by objective truth (which, of course, it is not), it would function like an accurate thermometer: a higher number = hotter … a lower number = cooler. That should be quite clear.

    We would like to understand why Dr. Arnold finds the concept of perfect validity ‘ludicrous.’

  13. Motion. Based on number of syllables. I am usually critiquing assessments but this is a great one if you are looking for people who look beyond the obvious!

  14. Just a couple of quick comments–the statement I characterized as ludicrous was not the quote cited in Dr. Williams’s post. However, to clarify, the cited statement that ‘if the test were valid every single hire scoring high on the test would be a top producer’ is ludicrous because validity lies on a continuum and is not an all-or-none phenomenon. A test or any other hiring measure can be valid for use even though it does not predict perfectly, and in the HR arena nothing will do so.

    As for stories of tests not working within organizations, this isn’t surprising insofar as some instruments are not well developed and some employers do not use the right tests for the right jobs. Consistent therewith, I have worked in organizations that had extremely poor interview programs. My reaction was not to conclude that interviews in general are invalid and recommend discontinuing interviewing applicants, it was simply to have a more standardized, structured interview that was related to the specific jobs in question.

    Finally, it is important to note that when an applicant fails on the job, probably more than one component of the hiring process came to the wrong conclusion–generally applicants that get hired tend to have done well on tests, interviews, drug tests, criminal background checks, etc.

    I trust this information is helpful. Again, if anyone is interested in an actual review of the existing science (for interviews as well), I would recommend reading the Psychological Bulletin article I referenced in an earlier post–this article wasn’t even written by anyone within the testing industry.

  15. Dr. Handler,
    this is a serious question and I hope that you may be able to answer –

    RE: LD (learning disabilities), discrimination, and testing. Quite often, many people with LDs are afraid of the stigma attached to disclosing their LD. They would rather not disclose than face the discrimination, misunderstanding, or underestimation of potential that can come from being honest. Perfect example: a hiring manager told me yesterday that he would NOT consider hiring an individual with ADHD, and did not even consider ADD/HD to be a ‘real disorder’ protected by the ADA.

    Well, here is the catch: the candidate does not want to disclose but needs to have special accommodations for test taking (maybe more time, a different room or location, maybe not being able to take it on the Internet, etc.).

    What then? He discusses his situation with the company, the company makes ‘reasonable accommodations,’ and of course rules him out because of his disclosure…

    Well, my questions: 1. How much research has been done on the probability of discrimination in testing? If there has been, how often does it happen? What has been done to avoid it?

    2. Is it not also true that testing can glean information regarding LDs, and if so, does that not also open the door for discrimination?

    I thank you in advance for your time in responding to these questions.

  16. I have to come to Dr. Dave’s defense. Both recruiters and I/O eggheads are in the profession of identifying skills and evaluating people…it does not matter if we use interviews, resumes, application blanks, pencil and paper tests, or personal meetings…The one thing professionals know based on reading (and doing) innumerable controlled studies is that some test methods are considerably more accurate than others…and, that no single one delivers perfect accuracy.

    I am impressed with your placement results. Whatever you are doing…keep it up. For the rest of us, may I suggest that if we have two applicants for one job, we should always choose the test(s) that consistently deliver(s) the best results (and there is a long list of controlled studies that show unstructured interviews do not ‘make the cut’). Regardless of personal opinion, even an interview and personal meeting would constitute a ‘test’.

    That brings us to the definition of ‘results’. Turnover is only one measure; we also need to evaluate training expense, coaching time, and personal productivity.

    Finally, I think I understand your prior experience with pencil and paper tests…many companies use tests without doing their homework. The only other reason I can imagine that might explain this effect would be bad management.

    Ask a question. Review a resume. Fill out an application blank…it’s all a test. The only question we have to ask ourselves is, ‘How accurate do we want to be?’
