It’s hard to spend more than a few minutes in the world of educational gaming without hearing the word “engagement.” Sometimes those who use it are careful to be specific about what it means, but often the term is casually mentioned as an accepted ingredient in the secret sauce used to produce learning.
Such casual usage tends to obscure the fact that researchers and practitioners still lack a consistent definition of engagement, a consensus on how to measure it, and a specific set of outcomes expected to result from it.
A recent special issue of the journal Simulation & Gaming threw itself headfirst into the engagement quagmire in an attempt to shed some light on what we know, what we don’t, and what we should attempt to change about our usage of the word.
The most striking findings come from a study that involved 13 researchers from five institutions. Across two experiments the researchers examined what five different measures of engagement revealed about three factors believed to influence engagement. In both experiments students played a point-and-click 2D puzzle game. In the first, the researchers varied the level of character customization (allowing students to select their character’s name, gender, shape, and wardrobe, for example). In the second experiment they varied the complexity of the game narrative (some players read a long backstory about their character and mission) and the artwork (others saw more colors and richer textures). In both experiments the researchers attempted to quantify engagement with five different measures:
- Four sets of self-report items
- Analysis of player videos (e.g., time spent looking away from the screen)
- A physiological measure of electro-dermal activity, or skin response
Overall, there was little relationship between the five indicators. In the first study there were no statistically significant correlations between any of the five measures, and in the second study there were only weak correlations among some of the measures.
In some cases, different measures even led to opposing conclusions. For example, a rich narrative--in which players were given a lengthy backstory--was found to be more engaging when combined with rich art, but only on the skin response measure. On the other hand, light narratives--those without the character backstory--were found to be more engaging based on the self-reported measures of engagement. Overall, the study suggests that any reported effect of engagement--or lack thereof--could simply be a result of the method used to measure it.
The findings provide a note of caution for anybody attempting to draw strong conclusions from measures of game engagement. So what can be done to bring some clarity to the concept?
One strategy is to try to redefine or constrain what engagement may refer to. In their contribution to the special issue, Michael Filsecker and Michael Kerres of the University of Duisburg-Essen emphasize making a distinction between engagement and motivation. They call for engagement to refer specifically to post-decisional or “volitional” processes--that is, the management and implementation of intentions, rather than the formation of intentions.
Nicola Whitton and Alex Moseley, two researchers from the UK, propose a similar distinction between what they call superficial game engagement and deep game engagement. In their model, superficial engagement is composed of participation and attention--logging on and looking at the screen, for example--while deep engagement involves four dimensions: a sense of captivation, an emotional pull, a feeling of belonging, and a sense of being part of the activity. Whitton and Moseley believe that these two categories and six dimensions can provide a simpler framework for talking about engagement.
At this point you may be thinking that getting everybody to agree on new definitions of engagement is a fool’s errand. And you may be right. There is, however, a third way of dealing with the vague nature of engagement: Do away with the term and focus on the specific outcomes it’s supposed to represent.
For example, if increased engagement with a math game is supposed to improve achievement, then simply evaluate whether it improves achievement. Similarly, if a game is supposed to improve a student’s emotional response to a particular subject, or increase the time they spend learning the subject, then simply evaluate emotional responses and time on task. If an educational game is ineffective, it shouldn’t have the opportunity to hide behind a vague claim that it still increases engagement, and if a game is effective, it shouldn’t need claims of increased engagement to demonstrate that effectiveness.
None of this is to say that it’s not important to know why a game produces positive outcomes. And there are still times when referring to engagement may be the best way to talk about a particular process or outcome.
However, it’s clear that merely explaining a game’s effectiveness with “engagement” will often be unhelpful. Given the rapid pace of technological improvements, we may soon have measures of engagement that can truly reveal what a learner is going through. But until then perhaps it’s best to think twice before using the word. If there’s a way to say what you mean without throwing “engagement” out there, that’s probably the way to go.