At more than 70 percent of colleges, placement tests determine whether students need to take remedial courses. If those tests are inaccurate, students may find themselves incorrectly placed on a remedial track and enrolled in noncredit classes that delay them from earning their degrees and increase the cost of their education.
A working paper, one in a series released by the National Bureau of Economic Research in June, suggests that placement tests could be replaced by an algorithm that uses a more wide-ranging set of measures to predict whether a student would succeed in credit-bearing college courses.
The authors developed an algorithm and tested it in an experiment with 12,544 first-year students across seven community colleges in the State University of New York system, observing a subsample of students for two years. The goal was to see how placements changed as a result of the algorithm, and whether the algorithm assigned students to college-level courses at higher rates than did placement tests. Researchers also wanted to know whether students placed by the algorithm passed their courses as predicted.
The results were promising. The algorithm placed more students in college-level classes: students assigned by the algorithm were 6.6 percentage points more likely to be placed into a college-level math course, 2.6 percentage points more likely to enroll in one and 1.9 percentage points more likely to pass the course in their first term.
The differences were even starker for English classes. Students placed by the algorithm were 32 percentage points more likely to be put into a college-level English course, 14 percentage points more likely to enroll and 7 percentage points more likely to pass the course in the first term.
“Perhaps the biggest relief is that the algorithm’s predictions seemed to bear out,” co-author Peter Bergman, associate professor of economics and education at Columbia University’s Teachers College, said in an email. “This is a relief, because without running an experiment, you don’t know if the assumptions underpinning the algorithm’s validity will hold up in practice. And it seems they did, which is great news.”
The research was conducted by the Center for the Analysis of Postsecondary Readiness at Teachers College and supported by the Institute of Education Sciences, the statistics, research and evaluation arm of the U.S. Department of Education. The experiment showed that students placed by the algorithm were more often assigned to and enrolled in college-level math and English courses, and, as a result, they earned more college credits compared to their peers whose course placements were determined by the usual tests. Students assigned by the algorithm also passed college-level courses at rates on par with their peers.
The findings of the experiment are in line with a long-standing body of research showing that students whose placement test results indicated they should enroll in noncredit, remedial courses often do just fine in college-level courses. There has also been broad recognition among higher education leaders in recent years that noncredit courses not only slow students’ progress toward graduation but can hurt persistence rates, especially for students of color, who have long been overassigned to developmental tracks. Colleges have increasingly turned to other models, such as corequisite courses (developmental classes taken alongside college-level courses) and additional tutoring and other academic supports, to address ongoing concerns about remedial education.
The algorithm, tailored to each of the colleges, used different metrics to assess how likely students were to pass college-level courses, including high school GPA, high school class rank and how much time had passed since high school graduation, in addition to the regular placement-exam scores.
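The working paper does not publish the model’s form or coefficients, but a predictor built on the measures the article lists can be sketched as a simple logistic model. Everything numeric below (the weights, the intercept, the 0.5 cutoff) and the function names are invented for illustration; in practice, such a model would be fit separately on each college’s historical outcome data.

```python
import math

def predict_pass_probability(gpa, class_rank_pct, years_since_hs, test_score):
    """Illustrative probability that a student passes a college-level course.

    Features mirror those named in the article: high school GPA (0.0-4.0),
    class rank as a percentile (0.0-1.0), years since high school graduation,
    and a placement-exam score (0-100). Coefficients are hypothetical.
    """
    z = (-4.0
         + 1.2 * gpa              # stronger GPA raises predicted success
         + 2.0 * class_rank_pct   # higher class rank raises it
         - 0.1 * years_since_hs   # longer gaps since high school lower it
         + 0.02 * test_score)     # exam score still contributes, but less
    return 1.0 / (1.0 + math.exp(-z))

def place(gpa, class_rank_pct, years_since_hs, test_score, threshold=0.5):
    """Assign college-level placement when the predicted probability clears a cutoff."""
    p = predict_pass_probability(gpa, class_rank_pct, years_since_hs, test_score)
    return "college-level" if p >= threshold else "developmental"
```

Under these invented weights, a recent graduate with a 3.5 GPA and a middling exam score would be routed to the college-level course, which is the kind of student the research suggests placement tests alone tend to misassign.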
Dan Cullinan, senior research associate for postsecondary education at MDRC, a nonprofit education research organization, said no model can predict with perfect accuracy whether a student is going to succeed in a college-level course.
“There’s going to always be a lot of factors you can’t put in a model that have a big effect on whether or not a student is successful,” he said. There might be aspects of their home life or financial challenges, for example, that affect students’ motivation and drive “that you can’t just throw into a placement model.”
But the accuracy of a model as a predictor of student success is less important than a placement model that ensures more students can take courses for college credit as soon as possible, Cullinan added, because “there’s just really no evidence” that it benefits students to put them in noncredit developmental courses when there’s a chance they could do well in college courses.
Taking fewer remedial courses could save students not only time but also money in tuition costs: $150 per student on average over the course of enrollment at an institution, according to the report. Implementing an algorithm, and the data collection it would require, would come at a cost to colleges, but Bergman said the process doesn’t need to be pricey. If high school transcripts and other data were easier to access, “costs could be driven way down.”
Cullinan doesn’t believe running an algorithm for each student is scalable across community colleges. However, algorithms could help college leaders assess what metrics are most likely to place students in college-level courses and could be used to build better, multifactor placement systems, he said.
Relying on more than exams to place students is already a trend, noted Sarah Ancel, founder and CEO of Student-Ready Strategies, an education consulting firm that partners with colleges to help them serve nontraditional students. She noted that colleges are increasingly using multiple measures to determine the right track for students.
“Colleges across the country have begun using multiple points of reference, and relying on more predictive measures than stand-alone exams, to ensure many more students have access to college-level math and English courses,” she said in an email. “When all students have access to college-level courses, colleges eliminate the risk of discouragement and attrition often faced by students who place into developmental education.”
Algorithms could be a powerful tool, Ancel added, but they should be “equitably designed.” For example, she raised concerns about including standardized test scores as a metric.
“There is ample evidence of inequities in access to test preparation for standardized exams, as well as skewed test results based on race,” she said. “During the pandemic, there were glaring inequities in who was able to take the exams at all.”
She believes that with an algorithm, low performance on any one measure could drag a student’s overall score down, risking sending more students to remedial education than if each metric were considered independently. Her alternative suggestion was a placement system with multiple ways to access college-level courses.
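Ancel’s concern can be made concrete with a toy comparison. Both rules below are hypothetical, with invented thresholds: a composite score that averages normalized measures, where one weak result pulls a strong student below the cutoff, versus a multiple-pathways rule of the kind she suggests, where clearing any single bar grants access.

```python
def composite_placement(gpa, test_score, threshold=0.6):
    """Single composite score: normalize each measure to 0-1 and average.

    A low exam score drags the composite down even when the GPA is strong.
    All scaling and the 0.6 cutoff are invented for illustration.
    """
    score = 0.5 * (gpa / 4.0) + 0.5 * (test_score / 100.0)
    return "college-level" if score >= threshold else "developmental"

def multiple_pathways_placement(gpa, test_score):
    """Each measure considered independently: clearing any one bar is enough."""
    if gpa >= 3.0 or test_score >= 70:
        return "college-level"
    return "developmental"
```

A student with a 3.2 GPA and a weak exam day (a score of 30) lands in developmental education under the composite rule but qualifies for the college-level course under the pathways rule, which is precisely the difference Ancel is pointing to.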
The authors of the working paper acknowledged some researchers have concerns that algorithms can perpetuate racial or socioeconomic biases in the data used. They noted that while the algorithm did not close gaps in access to college-level courses, it also did not exacerbate them. The algorithm boosted placement rates in college-level classes across groups.
Notably, placement rates for Black students into college-level English increased relative to white students, and placement rates for women into college-level math rose relative to men. Hispanic students’ placement rates in college-level math and English increased as well, though their gain in math was smaller than that of white students.
“If anything, our algorithm generally improves equity across groups typically underrepresented in college-level courses,” Bergman said. “So, in this instance, administrators should favor this approach for both equity and efficiency.”