Every year thousands of industrial-organizational psychologists gather for the annual conference of our professional society, the Society for Industrial and Organizational Psychology (SIOP). The conference always proves to be an interesting and fun event, chock-full of useful information, and readers who are unfamiliar with SIOP should definitely check it out. While much of the conference is highly academic, there is probably no other place where one can learn more about the actual implementation and measurement of assessment tools.
One of the most exciting things for me at this year’s conference was the launch of SIOP’s new blog/interactive community site, the SIOP Exchange.
I was part of a team that created this blog in order to help promote I-O psychology and build an increased sense of community amongst SIOP members and other interested parties. I encourage those folks in the ERE community who are interested in the viewpoint of I-Os on topics related to our work to check it out. The Exchange offers RSS feeds that will help keep you aware of topics that may be of interest to you.
In addition to launching the blog, this year I participated in several panels in which assessment solution providers and the end users of assessments discussed important issues related to technology and testing. It is rare to see such varied experience and expertise in the use of assessment gathered in one place. I want to share some of the hot topics with ERE readers to keep the community updated on how testing and assessment experts are handling the issues that affect technology-based testing. Here is a quick rundown of some of the themes that were represented.
Technology is more accessible than ever: We have reached a point where the differences between the technology platforms of assessment providers have started to level off. Almost every company offering assessments now has a relatively sophisticated platform that can handle the basics of test delivery, scoring, and reporting, and a good number of providers also offer a solid candidate management system as part of their platform. One interesting facet of this movement is that I-O knowledge is starting to get “baked in,” or embedded, into the technology itself. This trend will help make quality technology-based testing available to small and mid-market companies. While I believe this is a positive trend, we still need to be aware that there are trade-offs to be made; several of these are discussed below.
Test security. As always, there was a good bit of discussion about the security of Internet testing. One of the biggest issues was the use of proctored vs. unproctored assessments. While some firms currently do not allow their assessments to be used in an unproctored environment, the majority of providers will allow it, and we are starting to see a variety of interesting methods to help mitigate cheating. My takeaway is that the choice between proctored and unproctored testing has to be made case by case, but there are enough security strategies available that the negative impact of cheating is likely to be minimal.
Computer-adaptive testing. One of the most effective strategies to help thwart cheating is the use of tests that draw on large item banks in order to help ensure each test is different while also adapting the test content to the test taker’s ability level. CAT allows for shorter, more accurate tests. While it has been in use for years in the world of standardized testing, the leaders in the pre-employment testing community are starting to adopt this technology for their assessments. This marks a significant step forward in both security and usability.
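To illustrate the adaptive idea for readers unfamiliar with it, here is a deliberately simplified sketch: after each response, the engine picks the unused item whose difficulty best matches the current estimate of the candidate's ability, then nudges that estimate up or down. The item bank, scale, and update rule below are all made up for illustration; real CAT engines use item response theory models, not this simple heuristic.

```python
# Minimal sketch of adaptive item selection (illustrative only; real CAT
# engines estimate ability with item response theory, not a fixed step).

def run_adaptive_test(items, answer_fn, num_questions=5):
    """items: dict of item_id -> difficulty on a hypothetical 0-10 scale.
    answer_fn(item_id, difficulty) -> True if answered correctly."""
    ability = 5.0            # start the estimate at mid-scale
    remaining = dict(items)  # don't reuse items within one test
    for _ in range(num_questions):
        if not remaining:
            break
        # Pick the unused item whose difficulty is closest to current ability.
        item_id = min(remaining, key=lambda i: abs(remaining[i] - ability))
        difficulty = remaining.pop(item_id)
        # Nudge the ability estimate toward the evidence.
        ability += 1.0 if answer_fn(item_id, difficulty) else -1.0
    return ability

# Example: a candidate who gets every item of difficulty 6 or below correct.
bank = {"q1": 2, "q2": 4, "q3": 5, "q4": 6, "q5": 7, "q6": 8}
estimate = run_adaptive_test(bank, lambda _id, d: d <= 6)
```

Because each candidate's path through the bank differs, two test takers rarely see the same sequence of items, which is what makes this approach useful for both security and test length.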
Defining performance standards. We are still struggling with the line in the sand when it comes to thoroughly and accurately defining the performance dimensions to which an assessment will be linked. The rise of technology-based hiring platforms has led to the streamlining of what has traditionally been known as job analysis. While this may be OK in some cases, we are still struggling to understand at what point we are taking liberties. My opinion is that a thorough job analysis is always a good idea, especially given increased activity by the OFCCP.
OFCCP audits. I had several conversations about increased activity in the area of OFCCP audits. This makes the use of best practices for assessment (job analysis, validation studies, documentation of adverse impact) even more important than ever. The cost of doing things right is likely to be much less than the fine you will receive if your audit does not go well.
Simulations (more to come in the Journal). The use of simulations is the cutting edge of our field right now. The offerings in this area are starting to increase in sophistication but are still mostly limited to call center and in-box type assessments as these translate quite well to a simulated environment. We have a long way to go in this area but I have seen notable progress in the right direction over the past year.
Technology and development. Assessment providers are continuing to link their pre-employment assessment products to onboarding and development products. This is a logical step when one uses competency models and understands that the pre-employment dialogue with a candidate provides useful baseline data. End users of assessment have not fully caught on to the value of this viewpoint.
Every situation is different. One thing I gleaned from listening to assessment practitioners speak about their work is that every situation presents its own challenges. The contexts in which assessment is used vary quite a bit. Those seeking to use assessment correctly, and to get the level of ROI we know is possible, should enlist the help of an expert. There are many judgment calls to be made, and it pays to have expert insight when important decisions arise.
Globalization. I-O psychology is more global than ever. The use of assessment has been rapidly spreading across Asia and Europe. It will be interesting to begin having access to data that can help us to understand the commonalities and differences across cultures and geographical locations.
A final overall impression is that we I-Os are still marginalized and underused. It was nice to sit in a room full of folks who know assessment works, know how to make it work, and can prove the value it adds, but frustrating to know that we are often not even given a seat at the table when important decisions are being made.