3 Ways Educational Technology Tools Predict Student Success

By Eric Horowitz     Jul 13, 2015

One of the goals of academic assessment is to identify which students need help; the sooner they can be identified, the better. The promise of technology has been that its ability to collect unique data could make this process timelier, more accurate, and less burdensome.

But how might technology actually go about fulfilling this promise? Thus far, academic research suggests that technological tools can predict outcomes by collecting and analyzing data in three broad categories.

Usage

One way of predicting outcomes is simply by measuring how much students are using curricular materials—it’s essentially drawing conclusions from computerized attendance takers. For example, a new study led by Iowa State’s Reynol Junco (who once declared that “Most ed-tech startups suck!”) examined whether engagement with online textbooks could predict classroom outcomes. Using data from over 200 students across 11 college courses, he found that the number of days students used the textbook could predict course performance, and that this was actually a better predictor than previous course grades.

Meanwhile, another recent study, led by Nynke Bos of the University of Amsterdam, suggests that data on time spent watching online lectures, when combined with data on class attendance, can also predict course outcomes. These studies suggest that simply knowing how often a student decides to prioritize the class can provide an early warning about which students are struggling.
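
As a rough illustration of the approach in these studies (and only an illustration: the data, variable names, and effect sizes below are made up, not taken from the published papers), one could compare how well a simple usage count and prior grades each predict a final score with an ordinary regression.

```python
# A rough illustration with made-up data (not the studies' actual data or models):
# compare how well days of e-textbook use vs. prior GPA predict a final score.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 200

prior_gpa = rng.normal(3.0, 0.5, n)   # hypothetical prior grades
days_used = rng.poisson(20, n)        # hypothetical days the textbook was opened

# Toy assumption baked into the simulation: usage carries more signal than prior GPA.
final_score = 50 + 1.2 * days_used + 3 * prior_gpa + rng.normal(0, 8, n)

for name, x in [("days_used", days_used), ("prior_gpa", prior_gpa)]:
    X = x.reshape(-1, 1)
    r2 = LinearRegression().fit(X, final_score).score(X, final_score)
    print(f"{name}: R^2 = {r2:.2f}")
```

The only point of the sketch is the comparison between the two R² values, not the specific numbers.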

Engagement

A second way to predict student outcomes is to focus on data relating to student engagement. This research, which tends to involve data from learning management systems (LMS), goes beyond whether or not somebody has opened a book and shows that a variety of specific behaviors (e.g. posting a message) can also be indicative of future course performance.

For example, a 2010 study led by Leah Macfadyen of the University of British Columbia examined activity on Blackboard from five undergraduate classes. The researchers found 15 different variables, such as the number of discussion messages posted and the number of messages sent, that were correlated with final course grades. The data were used to develop a model that accounted for about 30% of the variance in final course grades and identified over 80% of students who would go on to fail the course.
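
For readers who want to see the general shape of such a model, here is a minimal sketch of an LMS-based early-warning classifier. It is not the model from the Macfadyen study; the feature names, the simulated data, and the choice of logistic regression are assumptions made for illustration.

```python
# A minimal sketch of an LMS-based early-warning classifier (not the Macfadyen model).
# Feature names, simulated data, and the logistic-regression choice are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical LMS activity counts per student.
discussion_posts = rng.poisson(8, n)
messages_sent = rng.poisson(5, n)
assessments_done = rng.poisson(10, n)
X = np.column_stack([discussion_posts, messages_sent, assessments_done])

# Simulate failure as more likely for low-activity students (a toy assumption).
risk = 1 / (1 + np.exp(0.3 * (discussion_posts + assessments_done) - 4))
failed = rng.random(n) < risk

X_train, X_test, y_train, y_test = train_test_split(X, failed, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# In an early-warning setting, the key question is how many of the students who
# actually fail the model manages to flag.
print("recall on failing students:", recall_score(y_test, model.predict(X_test)))
```

Framing the problem as flagging students who will otherwise fail makes recall, rather than overall accuracy, the number worth watching.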

Another example comes from a group of researchers led by Andrew Krumm, who went a step further and actually used LMS data to design an early warning system for college students in a STEM program. Their evaluation did not have an extremely rigorous experimental design, but the researchers found evidence that using LMS data to place students into different categories of need improved the average GPA of the cohort across a three-year span.

Knowledge/Skill

The most difficult technological (and computational) achievement is predicting student outcomes by actually evaluating their knowledge. The end goal is to have computers that can grade as accurately as humans, or better, which would ultimately allow for more frequent and painless assessment than would otherwise take place (arguably, blended learning systems are approaching this point in mathematics).

While tools built to assess skills are most commonly associated with math and writing, a new study led by Stanford’s Paulo Blikstein shows predictive data can be gathered in computer science. Specifically, Blikstein and his colleagues investigated whether machine learning algorithms could predict computer science course grades based on the progression of a student’s code in a single assignment. This effort required no specification of what a good piece of code should look like. The algorithm, specifically a cluster analysis algorithm, simply grouped students together based on how their code changed from one attempt to the next. It’s akin to having students write multiple drafts of an essay, and then having a computer group together those students who appeared to make the same kinds of changes from one draft to the next.
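
Here is a minimal sketch of that clustering idea, not the study’s actual pipeline: the per-student features (average edit size and average change in error count between snapshots) and the choice of k-means are illustrative assumptions. What carries over is the structure: students are grouped by how their code evolves, with no rubric for good code specified anywhere.

```python
# A minimal sketch of the clustering idea, not the study's actual pipeline.
# Each student is summarized by hypothetical features describing how their code
# changed between snapshots, and k-means groups similar trajectories. Nowhere is
# a rubric for "good code" specified.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical per-student trajectory features:
# [mean lines changed per snapshot, mean change in compiler errors per snapshot]
steady_progress = rng.normal([12.0, -1.5], [3.0, 0.5], size=(60, 2))
stuck_on_errors = rng.normal([25.0, 0.2], [6.0, 0.5], size=(60, 2))
features = np.vstack([steady_progress, stuck_on_errors])

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(
    StandardScaler().fit_transform(features)
)

# The clusters come out unlabeled; deciding which one is the "gradually
# improving" group takes a human look at the underlying code.
print(np.bincount(labels))
```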

After looking at how the computer grouped students, the researchers were easily able to determine that one group contained gradually improving code while another group contained code that continuously ran into roadblocks. The link between these group categorizations and final course grades wasn’t overwhelming, but the researchers did find evidence that the information provided by the machine learning algorithm (whether a student fell into the “good code” or “bad code” cluster) was predictive of course outcomes. Specifically, students with better code, as categorized by the algorithm on a single assignment, performed an average of 7.9% better on the class midterm than students with poor code.

First Steps

For these studies to be meaningful, final grades have to serve as an effective stand-in for learning, and the two are clearly not the same. But by linking quantifiable student actions, whether it’s opening an online textbook or typing a piece of code, to actual outcomes, each of these studies demonstrates a different way that technology can allow teachers to get a better sense of which students are learning without additional formal assessments.

Eric Horowitz is an EdSurge columnist, social science writer and education researcher.
