Giving Harder, Better, Faster Feedback

By Eric Horowitz     Oct 5, 2015

Computer software can tell whether students gave the correct answer on a test. Some digital math tools can even diagnose what procedural step may have been missed.

But can computers go beyond telling students how they did and what to do next, and offer guidance on how to think? That is, can a computer’s assessment abilities allow it to give feedback that helps students advance their thinking about complex problems? And can it be as good as a human at it?

The results of a new study on “knowledge-integration” feedback provide the latest evidence that this reality is getting nearer.

Unlike typical computerized feedback, which simply tells students whether their answer was “right” or “wrong,” “knowledge-integration” feedback is designed to help students update and link ideas in order to understand complex phenomena. For example, answering questions about the consequences of mitosis requires combining ideas from a variety of domains, as well as different ideas within each of those domains. (An example of knowledge-integration feedback: “You described how Plant B affected cell division. How did Plant B do this? Why does this matter for curing cancer?”)

The new study, which appears in the Journal of Educational Psychology, investigated the impact of knowledge-integration feedback when it was provided by computers and by human teachers. Researchers from UC Berkeley, the University of North Carolina, Carleton College, SRI International, and the Educational Testing Service (ETS) conducted a series of experiments involving middle school classrooms (the number of participants ranged from 124 to 270). Students learned inquiry science units from a computer system called WISE (Web-based Inquiry Science Environment), with each unit containing an embedded assessment consisting of a short essay or a diagram. (An example question: “Remember that cancer is mitosis out of control. Why or why not would you recommend this plant as a medicine for cancer?”) After completing the initial assessment, students received different kinds of feedback and completed the assessment again.

To create the feedback, researchers reviewed hundreds of student responses to different questions and wrote pieces of knowledge-integration guidance that they believed would be helpful. Each piece of feedback included prompts to elicit, add, distinguish and integrate ideas, and different feedback was created for students at different levels of knowledge.

Students were divided into two groups. For the first group, an ETS tool called c-rater was used to score their WISE assessment essays on the question of how the sun helps animals survive. Based on the score, the computer chose a piece of knowledge-integration feedback from the pre-populated list. For the second group, teachers read the essays and assigned feedback from the list. Students then revised their essays, which were graded by the researchers based on a rubric.
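In the computer condition, the workflow amounts to a score-to-prompt lookup: the automated rater’s score on the essay selects one of the pre-written pieces of guidance. The sketch below illustrates the idea in Python; the score bands, prompts and function name are hypothetical and are not drawn from the study.

```python
# Hypothetical sketch of score-to-guidance assignment; the bands and prompts
# below are illustrative only, not the ones used in the study.

KI_GUIDANCE = {
    1: "What do animals need to survive? Add an idea about where that energy comes from.",
    2: "You mentioned the sun. How does the sun's energy reach animals? Link the steps.",
    3: "You connected plants and animals. Explain how energy moves along that chain.",
}

def assign_guidance(essay_score: int) -> str:
    """Map an automated essay score (e.g. from a tool like c-rater)
    to a pre-written knowledge-integration prompt."""
    # Clamp out-of-range scores to the nearest defined band so every
    # student receives some prompt.
    band = min(max(essay_score, min(KI_GUIDANCE)), max(KI_GUIDANCE))
    return KI_GUIDANCE[band]

print(assign_guidance(2))
```

The point of the design is that the expensive, expert work (writing the guidance and deciding which kinds of responses warrant which prompt) happens once, up front; the per-student step is just selection.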

The results revealed that both groups of students improved on the assessment essay. More tellingly, who (or what) assigned the feedback didn’t seem to matter: there was no statistically significant difference in improvement between students who received feedback chosen by a teacher and those who received feedback chosen by a computer.

A follow-up experiment replicated these findings with a slightly different design. This time the WISE assessment involved drawing diagrams of atoms. One group of students had c-rater evaluate their work and assign feedback, while the other group received feedback from two expert science teachers. Once again, students in both conditions improved their diagrams after receiving feedback, but neither source of feedback proved better than the other.

A final experiment found that, when it came to diagramming how plants get energy from the sun, knowledge-integration feedback was also beneficial relative to the kind of specific procedural feedback often used in math programs. In one condition, students received knowledge-integration feedback (e.g. “Review step 3.9 to find out how plants get energy during cellular respiration”); in the other, students received specific feedback about what their diagram was missing or what was incorrect about it (e.g. “Improve your diagram to show that plants use energy stored in glucose during cellular respiration”). All students improved their diagrams after receiving feedback, but students who received the knowledge-integration feedback made larger gains.

Taken together, these results suggest that computerized grading of science assessments can effectively sort students into groups that should receive similar feedback. This provides some evidence that we’re getting closer to the point where machines can tell students specifically what they got wrong and then assign prompts that help them succeed at the difficult work of combining complex scientific ideas.

Such an approach also highlights how technology can amplify the reach of an effective teacher’s work. If computers can accurately assign the feedback, the teacher can save time by writing a few pieces of guidance and letting the computer assign them. This could allow teachers to give feedback on assignments they otherwise wouldn’t have had time to carefully grade.

“If the computer could assign knowledge integration guidance effectively,” write the authors, “then the teacher could spend more time planning instruction and working with students who need additional help.”

Eric Horowitz is an EdSurge columnist, social science writer and education researcher.
