Why Research Fails in the Classroom

By Gates Bryant | Jun 16, 2016

Personalized learning is one of the buzzier buzzwords in higher education. But it’s not a word without merit, and the motivation to bring personalized learning to campuses is getting stronger. So it’s not surprising that the higher education market is now packed with a profusion of products, broadly called courseware, aimed at facilitating personalized learning. While the efficacy of personalized learning is proven, the products themselves often are not. In a category like courseware, which is evolving rapidly and difficult to define, academic research and efficacy studies are vital to establishing that proof.

Finding evidence may seem simple; after all, it’s really about research and development, and other industries do this well. But in higher education, the process of publishing academic research, applying it in courseware product development and using it in the classroom is disjointed. We need a better bridge between these arenas to help faculty and instructional designers make more informed decisions about digital learning and to support companies in their quest to build better tools.

With that in mind, I sat down for a conversation with Larry Rudman of Kaplan, Inc. As the Vice President of Instructional Design and Research, Larry conducts, evaluates and implements research on learning science to improve Kaplan University’s courses and student outcomes.

Bryant: What are the key challenges for colleges and universities looking to apply the insights of academic efficacy studies to online learning environments?

Rudman: One of the greatest challenges is the sheer number of topics researchers study. One group follows a particular line of research, and another group follows a separate line of inquiry, each with a narrow focus. Rarely does research on two different topics build on each other’s findings. So you are left wondering how to take these two threads of thought and create a better learning experience for students.

The logical approach is to be as methodical as possible: take one piece, implement it, then measure, adjust and layer in new threads. Learning science findings are not cut and dried. After all, we are still fundamentally talking about measuring human interactions. Faculty have to synthesize the findings in the context of the student experience and the student herself.

Good point, we’re talking about real students in real classrooms. Higher education has a habit of categorizing a tested approach as a best practice without recognizing that “lab”-to-real-life shift. How do these real learning environments change how research should be applied?

First, classrooms come in all shapes and sizes with different types of learners. So instructors have to adopt a researcher’s approach. They must conduct small experiments to find out how far to the right or left they can adapt the findings and still achieve the same outcomes in their classrooms.

Take for example something we’ve been working on at Kaplan University: college-level composition. Most of the research on how to improve student writing has been done in a traditional, face-to-face setting. How do you translate that to an asynchronous environment?

What we did was take a sliver of the curriculum and adapt it based on principles we gleaned from research on effective writing practices in traditional settings. Now we are evaluating the impact on student writing: sections are randomly assigned either to the existing portion of our curriculum or to the version we modified based on our review of the research, and we compare the two.
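To make that design concrete, here is a minimal sketch of a randomized, section-level comparison. The section IDs, rubric scores and effect size are all hypothetical (the interview does not describe Kaplan’s actual analysis); the point is only the shape of the experiment: random assignment, then a simple two-sample test on the outcome.

```python
import random
from statistics import mean

from scipy import stats

# Hypothetical course sections; real section IDs are not in the article.
sections = [f"COMP101-{i:02d}" for i in range(1, 21)]

# Randomly assign each section to the existing or the modified curriculum.
random.seed(42)  # fixed seed so the assignment is reproducible
shuffled = random.sample(sections, k=len(sections))
control, treatment = shuffled[:10], shuffled[10:]

# Placeholder outcome: one mean rubric score (0-100) per section. In a real
# study these would come from instructor-scored writing assignments.
control_scores = [random.gauss(72, 6) for _ in control]
treatment_scores = [random.gauss(75, 6) for _ in treatment]

# Compare the two groups with Welch's two-sample t-test.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)
print(f"control mean = {mean(control_scores):.1f}, "
      f"treatment mean = {mean(treatment_scores):.1f}, p = {p_value:.3f}")
```

Note that randomizing at the section level, as the interview describes, makes the section, not the individual student, the unit of analysis; a real study would account for that clustering.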

Then there is often the challenge of insufficient detail on a study’s methods and procedures. There is limited information on exactly what was done or how it was done, so it requires extra digging. Fortunately, the research community is collegial and generally very helpful in tracking down those details, but it still takes extra work on the part of the faculty member. This process doesn’t scale well as we think about more faculty grappling with these tools.

What is the best way for companies to be sure they are focused on the right problem or measure?

Companies commonly start with the solution. Instead, they should ask, “What is the problem we are dealing with?” or “What are students struggling with?” Starting there is critical because it informs how you will conduct the study, how you will measure it and how to make the results mean something.

With measurement, it’s about understanding what you truly want to measure. Researchers continuously have to ask themselves three questions: 1) what specific thing am I measuring; 2) does it have real-world applications; and 3) do the results really reflect the impact on student learning? Without these questions, you can’t establish reliable measures, and thus can’t determine the impact of any change.

For example, if as a course developer you wanted to determine whether your writing course improved the quality of students’ writing in the workplace, you first need writing assignments in the course that reflect the type of writing you are targeting in the workplace. You need to demonstrate that student performance on the class writing assignments correlates with student writing on workplace assignments and is therefore a valid measure of workplace writing (validity). You also need to be able to score each student paper consistently against a well-defined rubric; the scoring should be consistent across instructors and properly rank the quality of student writing (reliability). Once you are confident in the reliability and validity of your writing assignments, you can be more confident that the changes you see under controlled conditions from newly devised instructional interventions reflect real improvements in learning.
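As a rough illustration of those two checks, the sketch below computes a validity correlation (course scores against workplace scores) and a simple inter-rater agreement for reliability. Every number is invented for the example, and Pearson correlation is just one common choice; the interview doesn’t specify which statistics Kaplan uses.

```python
import numpy as np
from scipy import stats

# All scores below are made up for illustration; none come from the interview.

# Validity: do scores on course writing assignments track scores on
# comparable workplace writing for the same 12 students?
course_scores = np.array([68, 74, 81, 59, 77, 85, 70, 63, 90, 72, 66, 79])
workplace_scores = np.array([65, 70, 84, 61, 75, 88, 72, 60, 92, 70, 64, 81])
r_validity, p_validity = stats.pearsonr(course_scores, workplace_scores)
print(f"validity: r = {r_validity:.2f} (p = {p_validity:.3f})")

# Reliability: do two instructors scoring the same papers against the
# rubric agree? A simple inter-rater correlation is used here.
rater_a = np.array([3, 4, 4, 2, 3, 5, 4, 2, 5, 3, 3, 4])  # 1-5 rubric scale
rater_b = np.array([3, 4, 5, 2, 3, 5, 4, 3, 5, 3, 2, 4])
r_reliability, _ = stats.pearsonr(rater_a, rater_b)
print(f"reliability: inter-rater r = {r_reliability:.2f}")
```

In practice, researchers often prefer an intraclass correlation or Cohen’s kappa for inter-rater reliability, since a plain correlation can look high even when one rater scores systematically harsher than the other.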

The best thing to do is listen to the people who are closest to students. Some of the most effective solutions come from faculty members or instructors. Working with educators this closely throughout the formulation, execution and evaluation of research and product development is the surest way to make research and tools work in real classrooms.

This interview was edited for length and clarity.

Gates Bryant is a partner with Tyton Partners, an advisory firm that provides investment banking and strategy consulting services to companies, organizations and investors. The firm published a three-part series on courseware that enables digital learning in 2015. A new report on courseware in higher education will be available in the fall of 2016.
