Dept of Ed Report Encourages Sharing Across Disciplines

By Andrew Plemmons Pratt | Mar 19, 2013

Late in December, the U.S. Department of Education’s Office of Educational Technology released a 100-page draft policy report on “Expanding Evidence Approaches for Learning in a Digital World.” While a key focus of the report is the kinds of information we should marshal to evaluate learning technologies, the more important lesson of the document is about people. Through case studies and reviews of current research, the report makes many recommendations, but three stand out.

This final installment in a three-part series highlights the last of those recommendations: the need to forge stronger lines of communication among education researchers, technologists, and educators, and to share insights that might otherwise remain siloed within existing disciplines. For background on “evidence-based” research and a deep dive into another major recommendation (the notion that technologists, educators, and researchers must collaborate across their respective fields), refer to Part I of the series. Or skip to Part II, which makes the case for using design thinking and continual improvement processes when implementing edtech.

Addressing the Information Vacuum

The two previously highlighted ideas from “Expanding Evidence” are about collaboration across disciplines, but there’s a third facet of idea-sharing that the report argues for explicitly, even as the document itself demonstrates the power of cross-pollinating ideas across disciplines.

Recommendation #2 is that technology developers should use established research and theory from the learning sciences as “the foundation for designing and improving digital learning resources.” To make that possible, the report says, education researchers must “make compendiums of research-based principles for designing learning systems widely available, more understandable, and more actionable for learning technology developers” (p 89).

Fortunately, the report itself is an admirable example of just such a compendium. The stated audiences for the document are listed on the project’s homepage: district and school leaders, teachers, learning-technology developers, learning-technology researchers, and learning-technology R&D funders. In collecting ideas for that cross-section of the education sector, the report summarizes an immense body of existing research that any individual reader might not encounter otherwise. Just as design-based research requires continuous improvement, these proposed “compendiums” of research will have to continuously update specialists about work outside their professional fields. In the meantime, diligent developers, researchers, and teachers can look to some case studies in the report and, from there, continue their own cross-disciplinary reading.

The entire third chapter covers the potential for combining data from traditionally siloed systems to better address student needs. One fruitful case study focuses on the Youth Data Archive, or YDA, a project of Stanford University’s John W. Gardner Center for Youth and Their Communities. The YDA knits together information on individual young people from “school districts, community colleges, local health departments, county offices of education, human services agencies, recreation and parks departments, and youth-serving nonprofit organizations” (p 40). In one analysis, YDA researchers found that the strongest predictor of future school absenteeism was prior chronic absenteeism—stronger than data on past suspensions or demographic information like ethnicity or family income. Chronic absenteeism in middle school was, in turn, highly correlated with lower high school math achievement. Those insights have the potential to focus support where it is most needed. Working separately, those institutions might not have been able to identify the strongest potential area for intervention, and would have continued their work in an information vacuum.
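For a concrete sense of what this kind of record-linking looks like in practice, here is a minimal Python sketch of the general approach. The tables, column names, and values are invented for illustration; they are not the YDA’s actual data or schema.

```python
# Hypothetical sketch of linking siloed agency records, in the spirit of the
# Youth Data Archive. All datasets, columns, and values below are invented.
import pandas as pd

# Each agency keeps its own records, keyed by a shared student identifier.
attendance = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "ms_absence_rate": [0.02, 0.18, 0.22, 0.05],  # middle school absence rate
})
discipline = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "prior_suspensions": [1, 0, 2, 0],
})
outcomes = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "hs_chronic_absent": [0, 1, 1, 0],  # chronically absent in high school?
    "hs_math_score": [90, 62, 55, 88],
})

# Knit the siloed tables into one longitudinal view of each student.
linked = attendance.merge(discipline, on="student_id").merge(outcomes, on="student_id")

# First-pass check: which prior signal tracks later absenteeism most closely?
print(linked[["ms_absence_rate", "prior_suspensions"]]
      .corrwith(linked["hs_chronic_absent"]))

# And how does early absenteeism relate to later math achievement?
print(linked["ms_absence_rate"].corr(linked["hs_math_score"]))
```

Even a toy join like this makes the report’s point: the predictive signal lives across agencies, so no single office working alone could have surfaced it.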

Research That Matters

Making more coordinated use of traditional school and community-level data to support students is one important example of cross-pollination; another is the introduction of emerging psychological research into traditional teaching settings. In another case study, the report authors examine work on the role of emotion in student learning. “Extensive research has shown a relationship between students’ conception of intelligence as fixed or expandable and how they view success and failure as having an influence on the learning challenges they will seek,” they explain. In an experiment using a software tutor for teaching geometry and statistics, University of Massachusetts researchers built a system that could monitor a student’s emotional state and provide adaptive computer-generated prompts to help learners overcome feelings of frustration and boredom (p 29).
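As a rough illustration of the adaptive-prompting idea (not the UMass system’s actual design), here is a toy sketch in which invented signals like time-on-problem and recent errors stand in for the tutor’s real affect detection:

```python
from typing import Optional

# Toy illustration of an affect-adaptive tutor loop. The states, thresholds,
# and messages below are invented assumptions, not the UMass tutor's design.
PROMPTS = {
    "frustrated": "This one is tough. Let's break it into smaller steps.",
    "bored": "You're flying through these. Ready for a harder variation?",
    "engaged": None,  # learner is on track; no intervention needed
}

def estimate_state(seconds_on_problem: float, recent_errors: int) -> str:
    """Stand-in for the tutor's affect detector: map simple interaction
    signals to a coarse emotional state."""
    if recent_errors >= 3 and seconds_on_problem > 120:
        return "frustrated"
    if recent_errors == 0 and seconds_on_problem < 15:
        return "bored"
    return "engaged"

def adaptive_prompt(seconds_on_problem: float, recent_errors: int) -> Optional[str]:
    """Return a supportive prompt, or None if the learner seems engaged."""
    return PROMPTS[estimate_state(seconds_on_problem, recent_errors)]

print(adaptive_prompt(150, 4))  # frustration support
print(adaptive_prompt(10, 0))   # boredom / challenge prompt
```

The real system uses far richer signals, but the basic loop is the same: estimate the learner’s state, then choose whether and how to intervene.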

Later on, the report authors gesture to path-breaking work on the study of grit and other “non-cognitive” character traits as necessary complements to cognitive ability in academic achievement. They write, “we know that personal qualities related to intellectual curiosity, persistence, motivation, and interests can be just as important as subject matter knowledge in shaping students’ lives (Almlund et al. 2011)” (p 52). This is the very research that Paul Tough explains lucidly in his brilliant book, How Children Succeed: Grit, Curiosity, and the Hidden Power of Character, published just four months before the release of “Expanding Evidence.” Recommendation #12 is to allocate R&D funding for research on these non-cognitive skills, or what KIPP schools call “character skills.”

Where the Rubber Meets the Road

Any report on digital learning technologies would be incomplete without a discussion of artificial intelligence, and the example offered from the Navy’s Information Technology Specialist training program is particularly potent. The Defense Advanced Research Projects Agency, or DARPA, went to work on a significant problem the Navy faced: it needed to train highly effective IT specialists who could solve complicated technology problems aboard ships deployed around the world. Getting IT specialists to adequate levels of training previously required highly experienced instructors, a significant amount of classroom time, and then several additional years of on-the-job training. The process was slow, expensive, and dependent on limited human resources. So DARPA created an idealized training program with elite instructors and a small cohort of trainees. The program underwent rapid-cycle testing and iteration (design-based research) until its new graduates could outperform experts with years of experience. DARPA then built a Digital Tutor, or DT, program that could mimic the abilities of the elite trainers in the live-instruction pilot program.

The next round of trainees used the Digital Tutor instead of the live instructors, and the results were astonishing: “Students who had completed the 16-week DT program outperformed both graduates of the traditional 35-week IT training program and fleet IT experts with an average of 9.1 years’ experience in a series of practical exercises, network-building tasks, and interviews conducted by a Review Board” (p 27). This is an impressive result, demonstrating the potential for focused applications of artificial intelligence to train novices to solve complex problems. The application is narrow, and it involved students who had already demonstrated a high level of commitment and learning stamina (they were training to become military IT specialists, after all). But it should prompt educators to think about the powerful support that customized digital tutors could provide in specific contexts. And before anyone jumps to the conclusion that AI should replace teachers, the report authors also cite research indicating that powerful results come from careful integration of digital support systems into traditional instructional settings: “In fact, studies have shown that students taught by carefully designed systems used in combination with classroom teaching can learn faster and translate their learning into improved performance relative to students receiving conventional classroom instruction” (p 26).

Indeed, that’s the potential of digital learning technologies in a nutshell: they don’t replace teaching; they provide powerful tools for making it better. I want to underscore the emphasis on better, because it reinforces the idea of improvement. When it comes to education, the entire point of gathering evidence is to improve student outcomes. The point of identifying new kinds of evidence is the same. But “Expanding Evidence” demands both working together to innovate and accepting the idea that our first try at any educational tool won’t be our best. With the right evidence, we can innovate our way to something better. Then we can do it again.
