Thanks to the Internet, we’re fortunate to have something that all prior societies have lacked: a near-infinite number of people willing to explain why you’re doing something the wrong way.

But despite our wealth of naysayers, there is still significant value in knowing how to catch your own mistakes as they’re happening. Making accurate real-time evaluations about what you’re doing improves performance, and that could ultimately produce more time, money, and positive social interactions.

When it comes to solving math problems, students often fail to catch their own mistakes. In many of these situations the problem isn’t a lack of knowledge. Students understand the concepts involved in arriving at the correct solution; they just fail to recognize when they’ve made an error.

One explanation is that students still haven’t developed proper monitoring skills. For such students, attempting to solve a problem and monitor their work at the same time creates cognitive overload. It’s too difficult for them to reason about the solution and carefully monitor the automatic calculations they’re doing.

Might there be a way for students to develop and improve their monitoring skills? Some research suggests that students have an easier time finding the mistakes of others, so monitoring a third party could be a good way to help students develop monitoring skills. But there are social and logistical problems with having students monitor each other. Even adults are made uncomfortable when they have to work with somebody peering over their shoulder.

Could technology provide a solution? Monitoring the work of a computer avatar would avoid the social and physical complications of students observing each other, and it would also allow the process to be personalized.

A new study by Sandra Okita of Columbia University’s Teachers College takes a look at whether such a technological tool can be effective. In two experiments she examined 4th, 5th, and 6th graders from two low-income New York City schools as they worked on sets of math problems. The basic structure of both experiments involved one group of students who worked through problems on their own (the control group) and a second group of students who encountered a dinosaur named “Projo” with whom they took turns solving the problems. Thus, the latter group could observe and potentially stop and correct the actions being taken by somebody else.

The first experiment featured 40 students working in a learning environment called “Doodle Math.” The second experiment involved just 22 students, though they spent more than twice as much time working on problems as students in the initial experiment. The latter experiment used an environment called “Puzzle Math” that tended to have sub-problems that made up a larger puzzle. In both experiments the learning environment tracked student activity with a log file.

Okita examined the frequency and accuracy with which students corrected their own work (experiment 2). For the problems done by Projo, she also looked at whether experimental-group students were more likely to correctly evaluate Projo’s work (i.e., correctly say whether it was right or wrong) than control-group students were to correctly solve the same problem on their own (experiment 1). In addition, in both experiments students completed pre- and post-tests to provide a measure of whether they improved their skills over the course of the experiment.

The results suggest that technological tools can potentially be an efficient and effective way to teach students monitoring skills. Students who monitored Projo had more cases of self-correction on problems they did themselves, and those corrections were more accurate than the corrections of students in the control group. Students in the experimental group also did better than those in the control group on Projo’s problems (i.e., they correctly evaluated Projo more often than control-group students solved the same problem).

In both experiments students who reviewed Projo’s work also showed more improvement on the post-test relative to students in the control group, although in experiment 1 the difference did not quite reach statistical significance, and in experiment 2 the gap was only statistically significant when it came to problems focused on calculations rather than rules.

The findings are based on a small sample and they’re far from conclusive, so more work must be done to establish whether tools like Projo actually accomplish what they set out to do.

But the study provides a good illustration of how technology can fill a very small but very important niche. There’s a specific skill the conventional classroom tends to miss--real-time monitoring--and a highly focused computer program has the potential to teach it extremely efficiently.

Oftenthere’s an all-or-nothing framing when it comes to technology in the classroom.Okita’s research is another reminder that even in situations where alarge-scale blended learning environment is unfeasible or undesired, there maybe a place for limited technological tools that focus on teaching neglectedskills. Add up all these types of piecemeal advances and you start to make areal difference.