The development of Internet-based applications has been the result of an evolutionary process in which technology is leveraged to build new and better ways of doing things. The collective result of the individual steps in this process is the development of new standards that represent a quantum leap beyond the old ways they were built to replace. This type of evolution requires applications that set the pace by using technology to introduce new approaches to old problems. Applications that offer a clear improvement over existing approaches force the rest of the market to evolve, until these new ideas become a standard from which the next phase of evolution will begin. Online scientific screening offers an excellent example of this trend. By now almost every scientific screening vendor has one or more web-enabled products (for the purposes of this article I will refer to these products as "online assessments"). But it is important to recognize that not all online assessments are created equal. While there are a number of major "points of differentiation" that define the level of technological innovation associated with online assessment products, perhaps none is as important an indicator of a vendor's level of technological evolution as reporting functionality.

The Importance Of An Evolutionary, Systems Perspective

Examining the technological advances in online scientific screening requires that we begin shifting our view of what online assessment is. Specifically, we must no longer view online assessment as the delivery of individual tests that measure a specific trait (or set of traits). Instead, we must look at online assessment from a more holistic, systems perspective, in which the assessment itself is just one component of a larger technology-based system designed to accomplish the task of matching a person to a job.
One of the ramifications of the systems perspective is that we cease thinking about assessments as "tests" and start thinking about them as software. This viewpoint opens the door to a new way of understanding the evolution of online assessment, because it allows us to use ideas about the evolution of software to understand the forces driving the development of online assessment systems. In particular, it lets us apply Meir Lehman's "Law of Declining Quality." Lehman is a computer scientist who has studied software evolution for over 30 years and has formulated a set of basic laws that he believes govern the evolution of all software (and hence all Internet-based technology systems). His Law of Declining Quality states simply: "The quality of software systems will appear to be declining unless they are rigorously adapted, as required, to take into account changes in the operational environment." This law provides an excellent explanation of the forces behind the evolution of online assessment systems. Almost all online assessment products use content that has been created by professionals and is backed by a large body of evidence supporting its effectiveness. But it is no longer enough to simply web-enable this old content. We are entering an era in which the major differentiator between vendors is the system that wraps around the test content. The more useful a system's functionality, and the more technology is used to create new ways of leveraging the information it provides, the more evolved it is. The market's demand for newer systems that do a better job of solving old problems represents the "changes in the operational environment" that Lehman believes create the demand for continued adaptation and change.
As more innovative systems are released, the market favors those that provide new ways of solving old problems, and eventually other systems will be forced to adapt to these standards or be seen as outdated and of little practical use. When it comes to online assessment, there are many points of differentiation that demonstrate a vendor's level of evolution, but in my mind reporting functionality is one of the best examples.

Why Is Reporting Important?

Reporting is the main avenue by which the results of the assessment process are communicated. At the end of the day, it is humans who make the judgments about whom to hire. The role of the technology system is merely to provide them with the information they need to make these decisions accurately and efficiently. In support of this objective, the goal of reporting is to give decision-makers easy access to two basic types of information:
- Information about how well an individual matches with the requirements of a job.
- Information about how well this individual's predicted ability to do the job compares to that of other applicants for the same job.
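These two kinds of information can be illustrated with a small sketch. The dimension names, weights, and scores below are purely hypothetical assumptions for illustration; no vendor's actual scoring model is implied.

```python
# Hypothetical job profile: performance dimensions and their weights.
# All names, weights, and scores here are illustrative assumptions.
JOB_WEIGHTS = {"problem_solving": 0.5, "conscientiousness": 0.3, "teamwork": 0.2}

def fit_score(dim_scores):
    """Type 1: weighted composite of an applicant's dimension scores (0-100)."""
    return sum(JOB_WEIGHTS[d] * s for d, s in dim_scores.items())

def percentile_rank(score, pool_scores):
    """Type 2: share of the applicant pool this score meets or beats."""
    return 100.0 * sum(s <= score for s in pool_scores) / len(pool_scores)

applicants = {
    "A": {"problem_solving": 80, "conscientiousness": 70, "teamwork": 90},
    "B": {"problem_solving": 60, "conscientiousness": 85, "teamwork": 75},
    "C": {"problem_solving": 90, "conscientiousness": 65, "teamwork": 70},
}
scores = {name: fit_score(dims) for name, dims in applicants.items()}
pool = list(scores.values())
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: fit={s:.1f}, percentile={percentile_rank(s, pool):.0f}")
```

The point of the sketch is that both numbers come from the same underlying data: the job-person match is a property of one applicant, while the comparison requires the whole applicant pool.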
The ability of a system to efficiently provide decision-makers with this information makes reporting functionality one of the key determinants of the ultimate effectiveness of any online assessment system.

The Evolution Of Reporting

The systems perspective mandates that reporting go beyond the mere regurgitation of results. The integration of technology into the online screening process has fundamentally altered reporting. Assessment products that provide reporting as a function of an individual test or assessment are cumbersome and require decision-makers to expend a great deal of time and effort. This type of reporting model will not be able to compete as online screening technology continues to evolve. The systems of the future will treat reporting from a systems perspective, resulting in flexible, dynamic reporting that cuts across individual assessment content in order to facilitate an ongoing decision-making process. A clear understanding of the advantages of this perspective requires a brief look back at the evolution of reporting technology.

Paper-and-Pencil Reporting: The Stone Age

In the past, reporting consisted of the generation of a paper-based score profile that often required an expert to read and interpret. This model also required that individual test results be sent off for processing. Once processed, a report providing an overall score for the applicant, as well as individual scores on relevant performance dimensions, was delivered to those responsible for making hiring decisions. These results were manually reviewed to gauge an applicant's suitability for a job and to determine how applicants compared to one another. In this reporting model, assessment results were provided on a one-time basis and were almost never used for any other purpose.
Reporting in this type of system was static and resource dependent, often requiring the hiring of outside consultants and in-depth training to support the use of the system. The complex logistics of this model, combined with the fact that it required hiring managers to manually combine scores and rank applicants, represented a major obstacle to the adoption of scientific screening.

Web-Enabled Static Reporting: The Bronze Age

In the early days of the Web, technology was primarily used to reduce the administrative complexity of assessments. In terms of reporting, the major advantages of this model over the previous one lie in its ability to score tests in real time, to deliver results rapidly via email, and to add some flexibility and usability to the reports themselves. These capabilities allowed users to skip some of the mechanical score-combination work that was so resource draining, while also facilitating a more rapid exchange of information. However, simply web-enabling test content does not really represent an evolutionary milestone, because it does not follow a systems perspective. In this model, assessments are still treated as individual entities, and the majority of the focus remains on the test content rather than the overall system. While this model does provide some advantages in terms of reporting, those advantages lie mostly in the delivery of the report rather than its substance. In most of these systems, reports are static, provided as a PDF file or free-standing document. These reports give users little "on the fly" flexibility when it comes to accessing and manipulating information about an applicant's ability to perform a job relative to other candidates. This means that these systems still place a burden on hiring managers because they require them to mechanically compare candidate information when making decisions.
This model does not provide a suitable way to help hiring managers efficiently deal with the high applicant volumes that characterize the current market, and it thus perpetuates biases against adopting online assessment. Unfortunately, a large number of online scientific screening providers are still firmly planted on this evolutionary plateau.

Dynamic Web-Based Reporting: The Industrial Revolution

The current state of the art in online assessment reporting represents a quantum leap beyond the previous step. Entering the game at this level requires a systems perspective in which the focus is not on individual assessment content, but rather on integrating the assessment into a complete system designed to help users make more effective hiring decisions. These systems take the information collected from an applicant during the assessment phase and use technology to allow the resulting data to be sliced and diced in a variety of ways based on the needs of the user. I like to think of these systems as a dashboard from which the hiring manager can both monitor the functioning of the system and access critical information in real time. Dashboard systems provide a user interface that serves as a central place for system users to access any of the information they need about a candidate. This interface offers several levels of information that can be accessed based on the needs of the user. At the highest level, the user can see one overall recommendation about the suitability of a given applicant as well as how that applicant compares to other applicants for the same position. Drilling down to the next level provides detailed information about an applicant based on the individual dimensions that define performance for the job in question.
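The tiered drill-down described above can be sketched as a simple data structure. This is a minimal illustration under my own assumptions about a three-level report (recommendation, dimension scores, narrative detail); the field names and the 70-point threshold are invented for the example, not any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicantReport:
    """Illustrative three-tier 'dashboard' view of one applicant's results."""
    name: str
    dimension_scores: dict                           # level 2: per-dimension detail
    narratives: dict = field(default_factory=dict)   # level 3: situational examples

    def overall(self):
        """Level 1: a single top-level recommendation (threshold is assumed)."""
        avg = sum(self.dimension_scores.values()) / len(self.dimension_scores)
        return "recommend" if avg >= 70 else "review"

report = ApplicantReport(
    name="Candidate A",
    dimension_scores={"problem_solving": 80, "teamwork": 65},
    narratives={"teamwork": "May defer to the group under deadline pressure."},
)
print(report.overall())               # level 1: overall recommendation
print(report.dimension_scores)        # level 2: dimension drill-down
print(report.narratives["teamwork"])  # level 3: narrative drill-down
```

The design point is that all three levels are views of the same stored record, which is what lets a dashboard re-slice the data on the fly instead of regenerating a static report.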
Some systems also provide an even deeper level of reporting, giving users the ability to view detailed narrative examples of how an applicant is likely to handle specific situations. Many dashboard systems also provide a "line out" for critical assessment data, so that the data gathered from an applicant can be used for many other purposes. For instance, assessment results can be used to generate structured interviews that provide extra information about an applicant's potential weaknesses. Assessment information can also be used to evaluate an applicant's suitability for other open positions within the organization. In the case of applicants who are hired, assessment data can serve as an instant baseline for training, development, and performance management purposes. This provides the ability to instantly create developmental plans that bridge employee selection and development systems. Overall, the advantage of the systems perspective lies in its ability to use technology to provide a means for using assessment data in a flexible, dynamic fashion via an interactive environment that makes it easy for the user to look at the same basic data in many different ways for many different purposes. The flexibility and ease of use associated with these types of systems will go a long way toward removing the barriers that have long stood in the way of the adoption of assessment.

The Future

Reporting functionality is an excellent example of how Lehman's ideas about the need to respond to external demands will push forward the evolution of online assessment. The market is creating evolutionary pressure that mandates a change in perspective from the level of the individual assessment to the systems level, in which the assessment content is a small part of a larger system that empowers users rather than enslaving them.