Getting Past Placement

EdReady as a Low-stakes Solution for Readiness

Download the Whitepaper

Published August 28, 2020

The Problem with “Placement”

Numerous studies and publications address the myriad philosophical and practical problems with placement testing as practiced by the majority of postsecondary institutions today. Pressure continues to rise, locally and nationally, for institutions to change their placement practices, and state-level policies are being adopted that force many of these changes to occur. However, just because the existing practices are ineffective doesn’t mean that the problem they are supposed to address does not exist. Quite the contrary: there is ample evidence that a growing number of students seeking postsecondary credentials are not fully prepared to succeed in their studies, and the deficiencies are most glaring, as they have always been, in math and English and among underprivileged populations.

What we need is a framework for better describing the shape of the problem at hand. Ideally, this framework would also suggest some specific tools and practices that could be deployed to manage the logistics and integration with institutional departments and policies. Our work in this area has given us the opportunity to develop just such a framework, and much of that framework is best understood from the context of the figure below.

[Figure: A graph plotting “confidence in results” (y-axis) against “preparedness for college” (x-axis) as an inverted bell curve, with somewhat arbitrary shading of the left, middle, and right sections.]
The challenge of traditional “placement” models

This figure is a simplified illustration of the administrative perspective on students matriculating to any given institution. The data acquired from a placement test––or any other metric––allow us to group students into three rough categories, indicated in the figure by different colors. Students who are very well prepared are relatively easy to discern, since they will arrive with strong evidence of that preparation and/or will perform well on any diagnostic we might administer. Similarly, students who are very weak should also be easy to identify. If a student falls into either of these categories, equivalent to occupying one extreme or the other in the figure shown here, then we can act on those data with some confidence. The problem is that most students are not so clear-cut and fall into a middle zone––shown in yellow here––where their level of preparedness is unclear. It is essentially impossible to establish a rigorous placement standard (often called a “cut score”) for these students. Moreover, the boundaries among these three categories are blurry: we cannot be confident in our “placement” unless the student falls close to one extreme or the other.

It is our contention that this figure does not vary substantially depending on the form of the measurement; in other words, the problem we face with “placement” cannot be entirely fixed by using different tests, or testing against different expectations (e.g., statistics versus algebra), or even using multiple measures. The fact is that we should not be using this information to sort and separate students in that middle zone; instead, we need to provide a way for those students to clarify their readiness status without holding them up.
