Why Education Is a ‘Wicked Problem’ for Learning Engineers to Solve

By Rebecca Koenig     Apr 2, 2020

This article is part of the guide: Better, Faster, Stronger: How Learning Engineering Aims to Transform Education.

Ashok Goel has a vision for the future of education. The professor of computer science and cognitive science at Georgia Institute of Technology believes every student and researcher should have access to artificially intelligent assistants that not only help them study facts and figures, but also collaborate more closely with other humans.

Goel has already helped to build such tools for use in his own classroom and worldwide through collaborations with the Smithsonian Institution. He hopes AI bots—such as his most well-known, a virtual teaching assistant named Jill Watson—may help to ease the entrenched tensions that make education such a “wicked problem” to solve.

In a recent conversation with EdSurge, Goel discussed how learning engineering researchers are applying artificial intelligence to help make high-quality education accessible, affordable and effective on a grand scale. The professor also acknowledged the many ethical questions raised by such efforts.

The interview has been lightly edited and condensed for clarity.

EdSurge: How would you define “learning engineering”?

Ashok Goel: Learning engineering certainly is a term in vogue now. We have usually called it LST—learning science and technology—but it's the same idea.

It's the engineering of learning. And the idea is really quite revolutionary, in a way, in the sense that we have always tried to model how people learn—we have long tried to build technologies that can help people learn—but we have never viewed learning, until recently, as something that you could completely engineer.

So, in a way, the term reminds me of another term that came into vogue about 20 years back called “cognitive engineering.” It was a very similar idea. In cognitive science, we can model human cognition, or we can build AI machines that are cognitive in some sense, but cognitive engineering was a term [for the idea] that we can build human-machine interactions that augment cognition in some way.

Learning engineering has a flavor of that. The goal here is: can we really engineer learning in a way that gets us the results we want?

What problem is learning engineering trying to address? Is there a worry that learning isn't happening to its full capacity, or not for everybody?

Goel: Education is a wicked problem. “Wicked problem” is a technical term initially proposed in the context of public policy in the early 1970s. There are some classes of problems that are really wicked, and what makes them wicked is that they have multiple goals, and these goals are in conflict with each other. If you try to achieve one goal, you end up doing worse on another.

So education is a wicked problem because you and I can think of multiple goals which are in conflict with each other. We want education to be accessible. We want it to be affordable for everyone. We want it to be achievable, by which I mean, if I register for a class, I should be able to achieve the goals that I want to achieve. Even if it's accessible and affordable, if I can't achieve it, it's not of much use to me. At the same time, we want learning to be very efficient. I should be able to learn what I need very quickly. And it should be very effective, in the sense that I can make use of it.

The difficulty is we know how to make it very efficient and very effective. All we have to do is do it individually—one-to-one tutoring—and we know that works very well. But that breaks the point about accessibility [and] affordability, because we cannot have one teacher for every student for every subject in the world. It's just not going to happen.

So those two sets of goals are in conflict. Accessibility [and] affordability is in conflict with efficiency and effectiveness. That's what makes it wicked.

So, the question really becomes, is there some way in which we engineer learning in such a way that we can achieve these multiple goals simultaneously? And no one has quite figured it out yet.

Some approaches to these problems may be policy-related, like funding or charter schools versus public schools, but it sounds like some of them are related to course design or technology?

Goel: There is another tension. What makes education really wicked, and connects with the two points you are making about policy and technology, is that, on one side, when learning occurs, it occurs one-to-one. So, there is a teacher, there is a student.

On the other side, learning is a fundamentally social process. Most of our learning is from our parents, by observing. Some people believe that we learn everything we need to learn by the age of six, just by observing our parents. You don't go to school to learn how to tie a shoelace. But in some sense, tying my shoelaces and brushing my teeth are perhaps two of the most important skills that I've learned. And my parents taught me when I was a little boy. So, a lot of learning is social.

And the question is, how does technology help with social learning also? And the difficulty is that most of the work on technology does not take into account the social learning aspects. If you look at almost all of the work coming [out] on so-called cognitive tutors, or intelligent tutors, you put a child in front of a machine and you expect your child to learn. And that works in very well-defined domains like arithmetic and algebra. It does not work in more open-ended domains like, let's say, sociology, psychology or philosophy, which inherently require discussion, inherently require give and take and learning from other people's stances and arguments.

Where does your work fit into solutions to these problems?

Goel: We have not one but several projects. The one that you probably know the most about is Jill Watson, because it has become famous. I was teaching this online class, there were hundreds of students taking it, and there were thousands of questions. My teaching team and I just didn't have the time to answer those questions. So we built this AI agent that could automatically answer some of them, which was good.

One difficulty was that it took us about 1,000 person-hours to build the first Jill Watson. That's because we didn't quite know what we were doing. It was a research project. We were just sort of wandering around, trying to think about how we could make it all work and whether it would actually work. But no teacher is going to put in 1,000 person-hours of his or her time in order to get a benefit of the 200 person-hours that he or she may save by not answering some questions. The ratio just wasn't right. As a research project, it was good. But it's not something I could hand over to you or to some colleague and say, go run it in your class.

Now, we are building a new technology that we call Agent Smith. It's another AI technology—and we're very excited about it—that builds [a] Jill Watson for you. And Agent Smith can build a Jill Watson for you in less than 10 person-hours.

But another technology that we are very excited about is a virtual experimentation research assistant. Another thing that happens in online learning is that if you're in a university, like Georgia Tech or any other major university, then you have access to a lot of labs—biology, chemistry, physics—and those labs are really important. But if you're an online student, you don't have access to any lab. Then, when we talk about the effectiveness and quality of learning, we cannot guarantee that quality, because you don't have access to labs.

So what the virtual experimentation research assistant—VERA—allows you to do is generate ideas and test them. So, you might ask, “Why are starfish dying across the West Coast of the United States?” and you can generate a model.

Now, model simulation tools have been around for a very long time; for at least a generation, if not more. The difficulty is [that] to build simulations, you need to know mathematical equations or computer programming. In VERA, there is an AI compiler. If you build a model, it automatically sets up the simulation for you. So you don't have to know any programming or mathematical equations. And then you can look at the results of this simulation.
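To make this concrete, here is a minimal sketch, not VERA's actual output, of the kind of simulation such an AI compiler could set up from a conceptual model: starfish preying on mussels, with an extra mortality term standing in for a wasting disease. All names and parameter values below are illustrative assumptions, not real ecological data.

```python
# A minimal sketch (not VERA's actual output) of the kind of simulation an AI
# compiler could set up from a conceptual model: starfish preying on mussels,
# with a hypothetical "wasting disease" mortality term. All parameters are
# illustrative, not real ecological data.

def simulate(steps=200, dt=0.1,
             mussels=100.0, starfish=20.0,
             mussel_growth=0.4, predation=0.01,
             conversion=0.005, starfish_death=0.1,
             disease_mortality=0.15):
    """Discrete-time predator-prey model with an added disease term."""
    history = []
    for _ in range(steps):
        eaten = predation * mussels * starfish
        mussels += dt * (mussel_growth * mussels - eaten)
        starfish += dt * (conversion * mussels * starfish
                          - (starfish_death + disease_mortality) * starfish)
        mussels = max(mussels, 0.0)
        starfish = max(starfish, 0.0)
        history.append((mussels, starfish))
    return history

if __name__ == "__main__":
    trajectory = simulate()
    print("final mussels: %.1f, final starfish: %.1f" % trajectory[-1])
```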

Is the purpose of building tools to improve online education because that is the most accessible and widespread kind of education, as opposed to requiring students to come to a classroom?

Goel: So, back to the wicked problem: How do we make education that is high quality and at the same time accessible and affordable? Brick-and-mortar places like Georgia Tech are very good because they offer quality education. But a Georgia Tech is neither accessible nor affordable. And it's not just in the U.S. I'm originally from India. When I think about India, most people in India cannot even dream of affording Georgia Tech. So what do we do? How do we help everyone?

Online education, to me, is one way of perhaps trying to address the wickedness of this problem. It is accessible and affordable on one side, but in online education, quality hasn't always been that high.

From my perspective, learning engineering is one way of seeing whether we can raise the quality to a level comparable to that of residential education, and yet keep it accessible and affordable. I don't think we're there yet.

A lot of people have by now recognized that the idea behind MOOCs—you just put it online, people will come, they will learn—didn't quite work out, for various reasons. So keeping that idea of open education but tinkering with it, adding these tools in different ways to make it higher quality, makes sense to me.

Goel: When we go to a physical classroom, students hang out with each other, they form study groups, they work with each other, talk to each other, share notes, they ask each other questions. If someone doesn't understand my class today, they can turn to someone else and say, “I don't know what he was talking about today, can you help me?”

So one question for online education is, can we build a new set of tools—and I think that's where AI is going to go, that learning engineering is going to go—where AI is not helping individual humans as much as AI is helping human-human interaction.

If you think of Facebook … I use Facebook one-on-one: it is in front of me, on the screen, and I'm working on it. But truly, I use it to connect with other people. I don't care about Facebook, but I care about my family and friends and contacts. That's what I'm interested in.

So we have now built two technical social agents. As students introduce themselves, the Jill Watson social agent is trying to find connections and say, “Hey, you like to play chess, and you like to play chess,” and it’s trying to build a buddy system.

This also raises all kinds of ethical issues. Issues of student privacy. Do I have the right to look at your introduction and someone else's introduction and connect the two of you? That might work out, that might not work out.
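As an illustration of the matching Goel describes, here is a minimal, hypothetical sketch of an interest-based buddy system. It is not the actual Jill Watson social agent, which would rely on far richer language understanding; the keyword list and student names below are invented for the example.

```python
# A minimal, hypothetical sketch of interest-based buddy matching, in the
# spirit of the social agent described above. It is NOT the actual Jill
# Watson implementation; a real system would use NLP rather than a fixed
# keyword list, and would need students' consent before connecting them.

INTERESTS = {"chess", "soccer", "robotics", "painting", "hiking"}

def extract_interests(introduction: str) -> set:
    """Naively pull known interest keywords out of a free-text introduction."""
    words = {w.strip(".,!?").lower() for w in introduction.split()}
    return INTERESTS & words

def suggest_buddies(intros: dict) -> list:
    """Return (student_a, student_b, shared_interests) for overlapping pairs."""
    names = sorted(intros)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = extract_interests(intros[a]) & extract_interests(intros[b])
            if shared:
                pairs.append((a, b, shared))
    return pairs

if __name__ == "__main__":
    intros = {
        "Ada": "Hi! I love chess and hiking on weekends.",
        "Ben": "I'm into robotics and chess.",
        "Cam": "Mostly painting for me.",
    }
    for a, b, shared in suggest_buddies(intros):
        print(f"{a} and {b} both mentioned: {', '.join(sorted(shared))}")
```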

What are the ethical components of all this?

Goel: Huge ethical issues and something that learning engineering has not yet started focusing on in a serious manner. We are still in a phase of, “Look ma, no hands, I can ride a bike without hands.” We're just so fascinated by our own small things that we haven't started thinking about the possibility that we're going to fall and hurt ourselves or hurt someone else.

There's also the issue of security. We are collecting all of this student data. How do we guarantee its security over long periods of time—not just this semester, but five years from now or 10 years from now? Students sometimes say or do things which are not the smartest things. They wish they [could take] it back. They wouldn't say those things if they knew some machine was watching them.

There are a whole bunch of problems. Are we entering into a sort of Big Brotherhood of learning?

There's not only a question of mistrust or distrust; there's also a question of over-trust. One thing we observed is that a student uses Jill Watson in one class, and one year later the same student takes another class, and Jill Watson is in that class too, because Jill Watson is by now running in about 17 classes, no longer my class alone. And in the meantime, in that year, Jill Watson has improved, because we are constantly improving it. So the same student looks at Jill Watson today and looks at Jill Watson next year and sees it doing better. And students have started saying, “Wow, this is on an upward trajectory. Soon, this is going to be like human teaching.” When in fact, I know we are nowhere close to that. But because students see this positive trajectory, it's almost as if Jill Watson can never be wrong, can never give a wrong answer, which, of course, is untrue.

What kind of mental models do humans build of AI agents? You can build an agent and introduce it, but humans will immediately build a mental model of that AI agent. And that mental model is not going to be the one that you, as a designer, have. I can think whatever I want of Jill Watson, but once we introduce it, the human beings interacting with it build a mental model of Jill Watson which is completely different from my mental model. And what we have found is that their expectations of Jill Watson are sky-high: that Jill Watson can answer any question, even “What is the meaning of life?” We need to manage the expectations of humans so that they ask just the right kind of questions.

So you teach the tool how to deal with humans, and then you teach humans how to deal with the tool?

Goel: It's a very good way of putting it. I'm going to build on this idea, because learning engineering has not yet started exploring it. We are thinking right now of only one side of the two-sided coin that we just talked about. We are thinking about how we can use machines to help humans. We have not yet started thinking about how humans will react to those machines, and what we need to teach humans about those machines so that the human-machine collaboration is an effective one.

Now, that is a challenge, but it's also an opportunity. Most people are scared of AI because they don't understand it. But by introducing AI in various environments and helping humans understand it, we can help humans learn about AI so that they become comfortable with it. But that means we have to explicitly consider that as another research problem: How do we help humans learn about AI so that the collaboration can work?

So, I think there is another problem with a lot of AI work in learning engineering. Everyone is focusing on architectures and algorithms and data. And that's important. But what people have not focused on equally well—or as deeply—is the interactions AI and humans will have with each other. And that's where the kind of questions you're raising come in.

Who needs to come into the learning engineering conversations to make sure this happens?

Goel: That's a great question. One research methodology is called participatory design research, in which you take all these stakeholders, bring them to the table, and they build tools together. For example, if it's a middle school classroom, [design it] with middle school teachers, middle school administrators, parents and students, not just AI researchers thinking about what will be cool to build in AI, which is the way it has been happening so far. So, we have started doing a little bit more of it, but we are still not where we should be.

This has to be participatory, otherwise, we're going to zigzag and not really solve the wicked problem.

Technology always has politics, in the sense that there are power relationships in society, including in the classroom. And technology can either disrupt that politics, or it can reinforce it. So how do we build technology with the right politics? Not that I have an answer to what the right politics is, but it's politics that, over time, tends to respect inclusion and diversity and equality and equity and values like that.

I don't think in learning engineering [that] we have yet tried very hard to either enunciate what those values are or to think about how we operationalize those values into the technology that we are building.

Technology should not be left to technologists.
