Research

How Intelligent Tutoring Systems Make Deep Learning Possible

By Kelli Anderson     Nov 26, 2018


For 30 years, the Harold W. McGraw, Jr. Prize in Education has been one of the most prestigious awards in the field, honoring outstanding individuals who have dedicated themselves to improving education through innovative and successful approaches. The prize is awarded annually through an alliance between The Harold W. McGraw, Jr. Family Foundation, McGraw-Hill Education and Arizona State University.

This year, there were three prizes: one for work in pre-K-12 education, one for higher education, and a newly created prize for learning science research.

From among hundreds of nominations, the award team gave the Learning Science Research prize to Arthur Graesser, Professor in the Department of Psychology and the Institute of Intelligent Systems at the University of Memphis. Reshma Saujani, Founder and CEO of Girls Who Code, won the pre-K-12 award. The higher ed award honored Timothy Renick, Senior Vice President for Student Success and Professor of Religious Studies at Georgia State University. The three winners received an award of $50,000 each and an iconic McGraw Prize bronze sculpture.

At the University of Memphis, Dr. Graesser is developing intelligent tutoring systems (ITS), such as AutoTutor, a virtual tutor that helps students comprehend difficult concepts and manage their emotions as they tackle them. EdSurge spoke with him about ITS and how it encourages students to go beyond memorization and practice the concepts they’re learning. He highlighted crucial aspects of deep learning—why you don’t usually find it in traditional classrooms, how conflict and confusion can inspire it, why people don’t like it, and why it’s so important for today’s students to achieve it.


2018 McGraw Prize in Education winner Arthur Graesser.

EdSurge: Intelligent tutoring systems are computer systems that simulate human tutors by providing customized instruction and feedback to learners. What drove you to develop these systems? What was the problem you were trying to solve?

Art Graesser: A lot of the learning that goes on is shallow learning: memorizing things and being exposed to ideas, for example. But to put those concepts into practice, you need deeper learning. We have evidence that when you take a demanding test that requires reasoning, reading a book or listening to a lecture in preparation is no different than doing nothing. It’s not until you have an interactive learning environment that you can get to the deeper learning. That’s where intelligent tutoring systems come into play.

What are some of the challenges with deeper learning?

People don’t like it! Thinking hurts! When you get ratings from classes, students tend to like the easier classes with less challenging material. That’s where it’s important to track the emotions of the learner. We’ve identified what the learning-centered emotions are. With advances in educational data mining, we can infer learners’ emotions from their natural language interaction with systems, their facial expressions, even their body posture. So if a learner is frustrated or disengaged, you have to do something adaptively. One of my favorite emotions is confusion. Confusion predicts whether or not people are thinking.
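The adaptive loop Graesser describes—infer an emotional state from interaction signals, then respond to it—can be sketched in a few lines. This is an illustrative rule-based stand-in, not the actual AutoTutor detector; the thresholds, hedge words, and tutoring moves here are invented for the example.

```python
# Toy detector for learning-centered emotions, using two of the
# signals mentioned in the interview: response timing and the
# learner's natural-language input. All thresholds are invented.

def infer_emotion(response_seconds: float, answer_text: str) -> str:
    """Guess a learning-centered emotional state from simple cues."""
    words = answer_text.strip().split()
    hedges = {"maybe", "guess", "think", "unsure", "confused"}

    if not words and response_seconds > 30:
        return "disengaged"        # long silence, no attempt at an answer
    if any(w.lower().strip("?.,!") in hedges for w in words):
        return "confused"          # hedging language signals uncertainty
    if response_seconds > 45 and len(words) < 4:
        return "frustrated"        # slow, minimal answers
    return "engaged"               # fluent, timely response


def adapt(emotion: str) -> str:
    """Pick an adaptive tutoring move for the inferred state."""
    moves = {
        "confused": "give a hint that resolves the impasse",
        "frustrated": "simplify the question and encourage",
        "disengaged": "switch task or re-engage with a prompt",
        "engaged": "continue with a deeper question",
    }
    return moves[emotion]
```

A real system would replace these rules with models trained on dialogue logs, facial expressions and posture data, but the control flow—classify, then adapt—is the same.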

You’ve said that one on-screen talking head in a tutoring situation is good, but two talking heads who are arguing with each other is better. Can you elaborate on that?

We have an intelligent tutoring system on the web that tries to teach learners difficult topics. One agent (or virtual onscreen person), which can be a peer agent, makes one claim and tries to justify it, and the other one might disagree with it. Sometimes the two agents can agree on information that is false and clashes with the knowledge of the learner. We’re trying to set up cognitive disequilibrium, where things don’t work as the learner might have expected. Things happen where there are contradictions and disagreements, and that leads people to think and reason, to resolve disagreements. That’s where a lot of the deep learning occurs.
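The mechanism of staged disagreement can be made concrete with a minimal "trialogue" scheduler: a peer agent asserts a claim, a second agent disputes it, and the learner is asked to adjudicate. The claim bank and turn structure below are invented for the sketch; the real system manages much richer dialogue.

```python
# Minimal scheduler for a trialogue (tutor agent, peer agent, learner)
# that stages a disagreement to create cognitive disequilibrium.
# The claims here are illustrative placeholders.

import random

CLAIM_PAIRS = [
    # (claim, counter_claim) -- the first claim is a common misconception
    ("Heavier objects fall faster in a vacuum.",
     "All objects fall at the same rate in a vacuum."),
    ("Correlation by itself proves causation.",
     "Correlation alone cannot establish causation."),
]

def stage_disagreement(rng: random.Random) -> list[tuple[str, str]]:
    """Have the peer agent assert a claim, the tutor dispute it,
    and the learner get the floor to resolve the conflict."""
    claim, counter = rng.choice(CLAIM_PAIRS)
    return [
        ("peer",  claim),
        ("tutor", f"I disagree. {counter}"),
        ("tutor", "Who do you think is right, and why?"),
    ]

turns = stage_disagreement(random.Random(0))
```

The last turn is the important one: the contradiction is left for the learner to resolve, which is where the reasoning happens.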

Alternatively, if there’s a lecture and only one side of an argument is presented, students don’t think as much.

What are some distinct features of your tutoring system?

One thing we do is build agents that hold a conversation in natural language. The computer has to understand as best it can what the meaning is behind what the student said. That’s different than the standard delivery of text and multiple choice questions. This is more interactive, and it attempts to pick up emotions. One other aspect is that agents can model good thinking skills, good interaction skills, and being a good social partner.
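Understanding "the meaning behind what the student said" comes down to comparing a free-text answer against an expected ideal answer semantically (AutoTutor has historically used latent semantic analysis for this). The sketch below substitutes a plain bag-of-words cosine similarity to show the matching idea in its simplest form.

```python
# Simplified answer matching: score a student's free-text response
# against an expected answer by cosine similarity of word counts.
# A stand-in for the semantic matching a real ITS would use.

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def matches_expectation(student: str, expectation: str,
                        threshold: float = 0.5) -> bool:
    """True if the student's wording is close enough to the expected
    answer to count that expectation as covered."""
    sa = Counter(student.lower().split())
    sb = Counter(expectation.lower().split())
    return cosine(sa, sb) >= threshold
```

When an expectation is not yet covered, the tutor can respond with hints or prompts rather than a right/wrong verdict—which is what makes the interaction conversational instead of multiple-choice.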


Operation ARIES!, an educational game that teaches scientific inquiry skills to high school and college students, is powered by a form of AutoTutor.


What breakthroughs have allowed you to develop a technology that recognizes emotions?

The development of automated—or computerized—understanding of natural language is one breakthrough. You can detect a lot of the emotions by virtue of the way people interact in natural language. You can also get it from the timing. If you’re having a flow experience—where you are concentrating so well that time and fatigue disappear—natural language tends to be more coherent and timing is more fluid. When people are confused, there are often more interruptions and pauses. The other thing is facial expressions. In the future, we think a lot of computers and hand-held devices are going to be able to detect facial expression. You wear confusion a lot on your face.
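The timing cue is easy to operationalize: from utterance or keystroke timestamps, summarize the pauses. Steady, fluid timing is consistent with flow; long or irregular pauses are consistent with confusion. The thresholds below are invented for illustration.

```python
# Timing features of the kind described above: summarize the gaps
# between interaction events (timestamps in seconds).

from statistics import mean, pstdev

def pause_features(timestamps: list[float]) -> dict:
    """Compute simple pause statistics from event timestamps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_pause": mean(gaps),
        "pause_jitter": pstdev(gaps),          # irregularity of timing
        "long_pauses": sum(g > 5.0 for g in gaps),
    }

def looks_confused(feats: dict) -> bool:
    """Invented heuristic: repeated long pauses or very uneven timing."""
    return feats["long_pauses"] >= 2 or feats["pause_jitter"] > 3.0
```

A production system would fuse these timing features with language and facial-expression signals rather than rely on them alone.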

What kind of outcomes have you seen with AutoTutor? What kind of impact on learners?

We have tested learning gains with AutoTutor in a variety of topics—computer literacy, physics, reading comprehension, research ethics, electronics. When you look at the pretest to posttest, the learning gains are approximately a letter grade. For comparison, if you test students who read a text for an equivalent amount of time, there’s no learning gain.
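Pretest-to-posttest gains like these are typically reported both as raw point differences and as a normalized gain (Hake's g, the fraction of the available room for improvement that was realized). The scores below are invented to show the arithmetic, not taken from the studies mentioned.

```python
# Raw and normalized (Hake) learning gain on invented scores,
# assuming a 0-100 scale.

def gains(pre: float, post: float) -> tuple[float, float]:
    """Return (raw gain, normalized gain g = (post - pre) / (100 - pre))."""
    raw = post - pre
    normalized = raw / (100 - pre) if pre < 100 else 0.0
    return raw, normalized

raw, norm = gains(pre=62.0, post=74.0)   # a roughly letter-grade jump
```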

One thing I should say: when you talk about shallow knowledge—can the students recognize a concept—then reading a book is pretty good. That’s a lot of what our educational system does. Here, read a book, or listen to a lecture. That’s fine for entertaining or inspiring and shallow knowledge, but it isn’t deep knowledge.

We developed a tutoring program for struggling adult readers, and that population loves the agents. We have this game set up where the struggling adult reader competes with a peer agent. We program it intelligently so the human always wins. We have had individuals who just laugh at Jordan, the peer agent, and make remarks like, “How can Jordan be so dumb?” It’s that realistic to them.

As you’ve pointed out, for millennia people learned through apprenticeships. It’s interesting that technology is allowing us to go back to that approach.

That’s the vision, exactly. The normal lecture and read-a-book format is a product of the industrial revolution, where industry wanted people to be trained faster and be able to execute procedures. They didn’t need or even want people with deep knowledge.

But now we’re in a knowledge revolution where the workforce needs to have a deeper set of skills and more interdisciplinary and collaborative problem solving. It’s a different world.

