From Futile Reviews to Meaningful Student Feedback

Opinion | Higher Education

By Ken Ryalls | Apr 22, 2016

Student ratings of instruction inspire passion. This feedback can have a very personal effect on those who teach. As a former professor, I can still remember some of the most scathing reviews I got from students. It doesn’t go away. So while I do not deny that student feedback is sometimes misguided, useless or even downright cruel, we must keep in mind that students spend more time observing the faculty member’s teaching than anyone else on campus. They’re uniquely positioned to provide meaningful insight into our teaching.

This academic year in particular has seen a rash of articles in the popular and pseudo-scientific press about the uselessness of student ratings of instruction and course evaluations. These attacks, which frequently cite flawed research or evidence that simply isn't there, drive readership. In addition, a minority of academics continues to direct vitriol at student ratings in an effort to silence the voice of the student.

In order to get a complete picture of instruction, we must continue to insist that students’ voices be heard. They take on tens of thousands of dollars of debt across their educational experience. Because most students invest heavily and care passionately about their education, we owe them at least one opportunity—if not more—during a semester to provide input about their learning experiences. That feedback is valuable to instructors, who can use it to improve their teaching, and to the institution, which gains another set of data to help evaluate, support and grow its faculty.

How to Find Useful Answers in Student Feedback

Students are qualified to report what they observe happening in class. They are also capable of rendering judgments about how much they perceive they learned in the course, how well the course was delivered and their desire to take the course. If we want meaningful feedback, we shouldn’t ask students to evaluate individual characteristics of faculty; rather, we should ask for their insight into instructional aspects of the course. Here are a few questions to solicit meaningful feedback:

  • Ask about observed teaching methods, such as “Did the instructor incorporate projects, tests or assignments that required original or creative thinking?”
  • Ask about perceived learning outcomes, such as “Have you developed specific skills, competencies and points of view needed by professionals in the field most closely related to this course?”
  • Ask about achievement of those learning outcomes, such as “Did you learn how to find, evaluate and use resources to explore a topic in depth?”

Just as faculty must have confidence in the system, students must also be assured their responses will remain confidential. Inform students that data will be held in a secure environment, will only be analyzed at the class level, and that results presented to the instructor will not be associated with any identifying information. Informing students about modifications made in the course based on previous student feedback also serves as encouragement to complete the ratings.

The fact that student ratings continue to be used on most campuses reinforces their value. The typically high response rates of the students surveyed, and the relatively low cost per class of conducting ratings, suggest that student ratings are a practical, if not the most practical, approach to obtaining feedback about instruction. In many instances, powered by mobile devices and automatic response systems, faculty can obtain instant feedback multiple times while the course is underway, allowing them to measure the effects of adjustments they make in response to earlier feedback. What other system provides such immediate feedback from multiple observers?

Institutions need a system in place for evaluating teaching effectiveness that incorporates a variety of measures, such as quality of course design, student products (e.g., creations, projects, papers), ratings by trained peers, teaching portfolios and so forth. Student ratings are only one source of data; my organization has long recommended that they count for no more than 30 to 50 percent of an overall teaching evaluation. (For more on this, check out our research paper “Challenging Misconceptions About Student Ratings of Instruction.”) They must be combined with additional evidence and multiple sources of information as part of an ongoing process so that administrators can make an informed judgment about teaching quality.

Student voice matters. We need to take the time to listen.

Ken Ryalls (@IDEAPrez) is president of IDEA, a nonprofit organization dedicated to the improvement of learning in higher education.
