As Colleges Move Away From the SAT, Will Admissions Algorithms Step In?

By Rebecca Koenig     Jul 10, 2020

This story was co-published with Slate Magazine.

Back before the internet made it possible—and popular—for people to document their lives in real time, teenagers found themselves preserved between the pages of their high school yearbooks—forever young. Enshrining cliques and clubs, acne and braces, these artifacts capture students as they are, in the present.

Yet many yearbooks also make predictions about the future. There’s a tradition of bestowing “superlatives” on members of the senior class who stand out as the “class clown,” “biggest flirt” and “most athletic.” Most of these awards reflect largely innocuous teenage concerns, but one superlative in particular feels more adult, more consequential (and perhaps a little less fun): “Most likely to succeed.”

This title isn’t meant for the smartest kid or the most popular kid—there are separate categories for those distinctions. No, this designation is for the student who is going places, whose ambition and abilities and je ne sais quoi will surely take her far beyond her high school’s halls. It’s an endorsement, an expectation and a prophecy from the graduating class. We believe in you.

The question of who’s most likely to succeed also drives the world of selective college admissions. And while that process is more formal than an ad hoc election for yearbook awards, from the outside, it can feel as opaque, and the results as idiosyncratic. At least high schoolers voting on superlatives get four or more years of exposure to their classmates before placing bets on their prospects; college leaders get mere months to identify that nebulous special something they’re looking for in applicants.

In addition to assessing students’ grades and essays, admissions officers have long looked to the SAT and ACT to help them decide who will make it in their campus settings, and beyond. But the COVID-19 pandemic has led to the sudden decision by many colleges to make submitting such scores optional. Even the College Board, maker of the SAT, advised colleges to be flexible about requiring the test in the upcoming admissions cycle because of challenges students face getting to in-person tests and glitches in the group’s efforts to administer exams remotely.

Of course, test scores are just one piece of data colleges turn to when predicting which students are likely to excel in rigorous courses, enrich campus life with a unique perspective, graduate in four years, or even help balance the books with a large tuition check. But the hole left by the SAT and ACT means more colleges will likely be looking for new ways to help sort out who gets their scarce slots.

Enter the algorithms.

Companies selling admissions algorithms say they have a fairer, more scientific way to predict student success. They use games, web tracking and machine learning systems to capture and process more and more student data, then convert qualitative inputs into quantitative outcomes. The pitch: Use deeper technology to make admissions more deeply human.

“I think this is going to be more heavily relied on, with less access to students in person, test scores, and reliable grades, at least for the spring semester and even going forward next year,” says Marci Miller, an attorney who specializes in education law and disability rights.

But Miller and other skeptics wonder whether the science behind these tools is sound, and ask if students’ data should exert so much control over their destinies. They question whether new selection systems create opportunity for more students at college, or just replicate a particular model of student success.

“The reason these are being marketed as making the process more equitable is, that’s the delusion that’s been adopted in the tech sector,” says Rashida Richardson, director of policy research at the AI Now Institute at New York University. “That’s the tech solutionism in this space, thinking that very complex and nuanced social issues can be solved with computers and data.”

Defining Success

Higher education is rife with buzzwords that come in and out of fashion, often tied to theories that promise to help the field make progress toward solving stubborn problems, like low graduation rates.

Popular right now is the idea of “student success.” Colleges want to support it, measure it, predict it. It sounds unobjectionable, and easy to swallow. But the concept’s slick coating also makes it slippery.

“The term ‘student success’ is extremely vague in higher education, for something that is thrown out there a whole lot,” says Elena Cox, CEO and co-founder of vibeffect, a company that sells colleges tools designed to improve student enrollment and retention rates.

How colleges define the concept affects their admissions process and influences what student data institutions collect.

If a successful student is one likely to have strong first-year college grades, then the SAT may be the admissions tool of choice, since that’s what it predicts.

“Correlating with first-year GPA is not trivial because if you don’t make it through the first year, you won’t make it to graduation,” says Fred Oswald, a psychology professor at Rice University who researches educational and workforce issues and advises the Educational Testing Service about the Graduate Record Examination.

Or if success looks like a student likely to graduate in four years, high school grades may matter more, says Bob Schaeffer, interim executive director of the National Center for Fair & Open Testing, an organization that advocates against reliance on standardized tests.

“We encourage schools to define success as four-year, or slightly longer, graduation rates,” Schaeffer explains.

But good high school grades don’t always predict timely college completion. A Boston Globe analysis of more than 100 high school valedictorians from the classes of 2005 to 2007 found that 25 percent didn’t get a bachelor’s degree within six years.

So some colleges try to dig deeper into the student psyche to figure out whether an applicant has what it takes to stay on track to earn a diploma. Admissions officers may try to discern “grit,” a quality studied by Angela Duckworth, psychology professor at the University of Pennsylvania. Or they may look out for students who seem confident, realistic about their own weaknesses, and able to work toward long-range goals—three of the eight “noncognitive skills” identified by William Sedlacek, professor emeritus in the University of Maryland College of Education.

There’s been growing interest among colleges in this kind of “holistic admissions,” in part due to the movement—well underway before the pandemic—to make test scores optional, according to Tom Green, an associate executive director at the American Association of Collegiate Registrars and Admissions Officers.

“When used in combination with GPA, [holistic admissions] can greatly increase the predictive quality of success,” he says. “I think people are really looking for more equitable ways of being able to identify good students, especially for groups of students who haven’t tested well.”

Admissions With Algorithms

One of those ways may be through mobile games. The pastimes produced by the company KnackApp are designed to feel as fun and addictive as popular diversions like Candy Crush and Angry Birds. But this play has a purpose. Behind the scenes, algorithms allegedly gather information about users’ “microbehaviors,” such as whether they repeat mistakes or take experimental paths, to try to identify how players process information and whether they have high potential for learning.

Just 10 minutes of gameplay reveals a “powerful indication of your human operating system,” says Guy Halfteck, founder and CEO of KnackApp. The games are designed to “tease out, to measure and identify and discover those intangibles that tell us about the hidden talent, hidden abilities, hidden potential for success for that person.”

Colleges outside the U.S. already use KnackApp in student advising, Halfteck says, as does the Illinois Student Assistance Commission. For admissions, colleges can use the platform to create gamified assessments customized to the traits they’re most interested in measuring and include links to those games in their applications, or even tie them to QR codes that they post in public places.

Unveiling students’ hidden characteristics is also the aim of companies that record video interviews of applicants and use algorithms to analyze student “microexpressions.” That kind of tool is being used experimentally at Kira Talent, an admissions video interview platform. But it might not be ready for prime time: Kira Talent CTO Andrew Martelli says the science isn’t solid yet and recommends human admissions officers use rubrics while watching recorded interviews to make their own assessments about students’ communication and social skills.

Meanwhile, colleges hoping to measure more prosaic matters, like whether a particular student will actually enroll if accepted, may turn to tools that track their web browsing practices. At Dickinson College, admissions officers track how much time students who have already made contact with the school spend on certain pages of the institution’s website in order to assess their “demonstrated interest,” says Catherine McDonald Davenport, vice president for enrollment and dean of admissions there.

“That’s not telling me something specific,” she explains. “It’s giving me a point of reference of what people are looking for without being known.”

And many colleges employ the comprehensive services of enrollment management firms, whose machine learning tools try to detect patterns in historical student data, then use those patterns to identify prospective new students who might help colleges meet goals like improving graduation rates, diversifying campus or moving up in rankings lists.

“What the machine can do that human beings can’t do is look at thousands of inputs,” says Matt Guenin, CCO at ElectrifAi, a machine learning analytics company. “Sometimes an admissions process can be extremely subjective. We are bringing far more objectivity to the process. We’re essentially trying to use all the information at their disposal to make a better decision.”

Battling Bias

Questions about equity are top of mind for skeptics of algorithmic admissions tools—along with worries about whether they’re reliable (producing repeatable results), valid (measuring what they claim to measure) and legal.

“My major concern is that they are often adopted under the guise of people believing data is more objective and can help bring more equity into the process,” Richardson says. “There is tons of research that these systems are more likely to hide or conceal pre-existing practices.”

They also simply may not work. While some vendors publish white papers that seem to offer proof, critics argue that this evidence wouldn’t necessarily hold up if put through the peer review process of a reputable scientific journal.

Such self-assessments don’t always reveal whether tools treat all kinds of student users fairly.

Bias can sneak into these kinds of predictive models in several ways, explains Ryan Baker, associate professor at the University of Pennsylvania Graduate School of Education and director of the Penn Center for Learning Analytics.

Models built primarily with data from one group of learners may be more accurate for some students than for others. For example, Baker has found teachers and principals of suburban schools that serve middle-class families to be pretty receptive to participating in his research projects, while leaders at schools in New York City have been warier and more protective of student data.

“It’s easier to get data for white, upper-middle-class suburban kids,” he says. “Models end up being built on easier data.”

Meanwhile, models built on historical data can end up reflecting—and replicating—historical prejudices. If racism has affected what jobs students get after they graduate, and that data is used to train a new predictive system, then it may end up “predicting that students of color are going to do worse because we are capturing historical inequities in the model,” Baker says. “It’s hard to get around.”
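To make that mechanism concrete, here is a minimal, synthetic sketch (in Python, not any vendor’s actual model) of how it plays out: when the outcome labels themselves were shaped by past discrimination, a model fit to them learns to down-rank the affected group, even though underlying ability is identical across groups. All data and variable names here are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
ability = rng.normal(size=n)          # identical distribution for both groups
group = rng.integers(0, 2, size=n)    # 1 = historically disadvantaged group

# Historical "success" outcome: depended on ability AND on group membership,
# because of past discrimination in hiring, pay, promotion, etc.
success = (ability - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# A model trained on those outcomes absorbs the inequity as a feature weight.
model = LogisticRegression().fit(np.column_stack([ability, group]), success)
print(model.coef_)  # the group coefficient comes out strongly negative:
                    # the model now "predicts" that group will do worse
```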

Additionally, algorithmic admissions practices could run afoul of the law in several ways, Miller says. Collecting student information without consent may violate data privacy protections. And tools that “screen students out based on disabilities, race or income in a discriminatory way” may be illegal, even if that discrimination is unintentional.

“With any algorithmic discrimination, the information put in is the information that comes out,” Miller says. “It can be used for good, I suppose, but it can also be used for evil.”

Rethinking Predictions

Tipping the scales closer to “good” may mean rethinking the role of algorithms in admissions—and reevaluating whom colleges bet on as most likely to succeed.

Rather than use equations to pick only students who already seem stellar, some colleges try to apply them to identify students who could thrive if given a little extra support. The “noncognitive traits” Sedlacek identified as crucial to college success are not fixed, he says, and colleges can teach them to students who arrive without them if the institutions have data about who needs tutoring, counseling and other resources.

Selective colleges could learn a thing or two about how to do this well from colleges with open enrollment, Sedlacek says: “The trap of a very selective place is they figure, ‘All our students are great when they start, they don’t need anything.’”

Using algorithms in this way—“identifying students who are deemed to have risk”—can lead to its own forms of bias, Cox points out. But proponents believe the practice, if done well, has the potential to include, rather than exclude, more students.

Algorithms can also help make admissions less focused on evaluating individuals in the first place. Rebecca Zwick, a professor emerita at the University of California, Santa Barbara, and a longtime admissions researcher who works for Educational Testing Service, is developing a constrained optimization process that builds cohorts of students instead of selecting them one at a time.

From a large pool of students, the algorithm can produce a group that satisfies specific academic requirements, like having the highest-possible GPA, while also hitting targets such as making sure a certain percentage of selected students are the first in their families to pursue college.
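As an illustration only (not Zwick’s actual procedure), a cohort-selection problem of this kind can be written as a small integer program. The sketch below uses the open-source PuLP library and a made-up applicant pool: the objective maximizes the admitted class’s total GPA, while constraints fix the class size and require a minimum share of first-generation students.

```python
import pulp

# Hypothetical applicants: (name, GPA, is first-generation college student)
applicants = [
    ("A", 3.9, False), ("B", 3.4, True), ("C", 3.7, True),
    ("D", 3.8, False), ("E", 3.2, True), ("F", 3.6, False),
]
cohort_size = 3
min_first_gen_share = 0.33  # at least a third of admits are first-gen

prob = pulp.LpProblem("cohort_selection", pulp.LpMaximize)
admit = {name: pulp.LpVariable(f"admit_{name}", cat="Binary")
         for name, _, _ in applicants}

# Objective: maximize total (equivalently, average) GPA of the admitted class
prob += pulp.lpSum(admit[name] * gpa for name, gpa, _ in applicants)

# Constraint: admit exactly cohort_size students
prob += pulp.lpSum(admit.values()) == cohort_size

# Constraint: meet the first-generation target for the class as a whole
prob += (pulp.lpSum(admit[name] for name, _, fg in applicants if fg)
         >= min_first_gen_share * cohort_size)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([name for name, _, _ in applicants if admit[name].value() == 1])
```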

When Zwick tests the model against real admissions decisions colleges have made, her algorithm tends to produce more impressive results.

“Often the overall academic performance of the class admitted through the optimization procedure was better, while simultaneously being a more diverse class as well,” she says.

Yet Zwick, author of the book “Who Gets In? Strategies for Fair and Effective College Admissions,” says she’s not sold on handing admissions decisions over to technology.

She believes humans still have an important role to play in making high-stakes selection decisions. That view is shared by the other academics, the lawyer and the policy director interviewed for this story, who say it’s up to humans to select tools thoughtfully in order to prevent and combat the ill effects algorithmic bias may have in admissions.

“People should be trying to look for it and trying to fix it when they see it,” Baker says.

Since the pandemic started, Davenport, the admissions director, has been inundated with marketing material about admissions technology products she might use at Dickinson College.

“Everybody seems to have an idea and a solution for my unnamed problem,” she says. “It’s somewhat comical.”

But even as her team makes use of some high-tech selection tools, she counsels them to wield this kind of power with restraint.

“There are a lot of schools that will use every single possible data point they get their hands on in order to inform a decision,” Davenport says. “We want to treat that information with integrity and respect.”
