Community

What Do Edtech and IKEA Have in Common? Persuasive Design.

By Jeffrey R. Young and Sydney Johnson     Oct 2, 2018

Technology shapes the way we interact every day. We FaceTime with family across the country, and we send snaps to our friends to let them know where we are and what we're doing.

But sometimes we fail to realize that the platforms and data that push us to interact don't always do so in objective ways. Our interactions are increasingly shaped by algorithms, and that code is written by humans: people who literally write the script for the ways that tech will make us tick, for better or for worse.

The practice of intentionally guiding user behavior is known as "persuasive technology," and it’s making its way into our phones, our homes and our schools.

This week on the EdSurge On Air podcast, we talk with three experts who study persuasive tech, behavior design, and the ways that the algorithms behind technology and search engines can have damaging effects on society and further exacerbate social inequalities.

This episode has been an experiment in format, and we’d love to hear your feedback. Email Sydney@EdSurge.com or Jeff@EdSurge.com. Listen below, or subscribe to the EdSurge On Air podcast on your favorite podcast app (like iTunes or Stitcher). Highlights from the conversation below have been lightly edited and condensed for clarity.

The best way to think about persuasive technology is to start with the iconic furniture store IKEA. Sandra Burri Gram-Hansen, an assistant professor of communication and psychology at Aalborg University in Denmark, says whenever she wants to kick-start a class discussion on persuasive design, she starts with examples of how that retailer guides its customers.

Burri Gram-Hansen: When you enter IKEA, there's always a big map of the IKEA store. It's usually blue, and it'll have yellow footprints or something on it. But the map is always made in a way that makes it seem like such a short walk; you can get from the entrance to the kitchen in just twenty little dots on the map. But realistically it takes you 40 minutes to get to that place. They make the complex task feel easier by reducing it, making it easier for us to get there.

They've also got the principle of “tunneling” going on, with all of the lighted arrows on the floors. Of course you could go wandering out on your own, but it's easier to stick with the tunnel and go the way the arrows are pointing.

So they have all of these things going on in the store, and you just go with it. You don't see anyone running in the wrong direction in IKEA.

So that's in a physical store. But the same principles can be applied, even more powerfully, in the digital realm. One person studying the way data and tech design can influence behavior is Margarita Quihuis, a behavior designer at Stanford University's Peace Innovation Lab, where researchers are looking for ways to harness behavior design in a way that promotes peace. In other words, can data and tech be designed to steer individuals, or our whole society, in a better direction?

Quihuis: The field of behavior design [was] started at Stanford under B.J. Fogg. He's also the father of persuasive technology. So when you think about how technology like cellphones changes behavior at scale, this was research that he was doing at Stanford in the '90s. At that time, in his PhD thesis, he called it “captology,” because we're captives of technology. And then the name evolved to be persuasive technology.

And then the iPhone came out, and we know how our lives have changed in the last eleven years with the advent of the iPhone. We can see how people have adopted behaviors that weren't possible before. Anywhere from how our attention has changed with technology to the risk of wearables.

Well, if you can do that with inane behaviors, with silly behaviors, could we also do that in a way that would promote peace? So Mark Nelson and I started down this path of saying, "Well, what is peace in this age—this digital age?" And we determined that it was really about how good we can be to each other, and how good we can be to each other through mediated technology. And it could be across some sort of difference boundary, whether it's gender, or race, or nationality, or language, or anything. Because we live so much in a digital world, the way we design the software and the technology all of a sudden has an impact on how we interact with each other.

That has optimists feeling hopeful about technology in the future. But plenty of others are concerned about what or who gets to design "good." Safiya Noble is an assistant professor at the University of Southern California and author of the book, Algorithms of Oppression. She studies the way technology and algorithms can worsen social inequalities and prejudices.

Noble: I am interested in things like who controls the narratives that have influence in our society, particularly with large digital media platforms. And it's for this reason that I was tracing what was happening with key issues of representation and ideas of fairness around communities, and found what I felt were harmful ways that these technologies influence ideas about people.

In the realm of persuasive technology, it's interesting, because I think that the public is not particularly aware that they are being persuaded. And of course the news of Cambridge Analytica using people's psychographic data, I think that came as a huge shock to most people.

In many ways I think that is business as usual. I mean, Cambridge Analytica is just one of many firms that are interested in persuading people to buy a product or be interested in trying a new service. The traditional model of advertising is really what undergirds most of the digital technologies that we are engaging with.

In higher ed, we see these sorts of quiet cues all the time. They mostly take the form of nudges: those text messages or emails sent to students reminding them about important upcoming deadlines or financial aid, or checking in on their grades and flagging them if they're dipping. But Sandra Burri Gram-Hansen, the professor in Denmark we spoke to, argues that nudging doesn't necessarily make a lasting impact once those nudges have stopped.
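Mechanically, a nudge of this sort is little more than a scheduled check against a condition. Here is a minimal sketch of the idea in Python; the threshold, message text, and data shapes are all hypothetical, not any particular vendor's system:

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    email: str
    recent_grades: list  # most recent grade last

def grade_is_dipping(grades, window=3, drop=10):
    """Flag a student whose average over the last `window` grades
    fell by more than `drop` points versus the window before it."""
    if len(grades) < 2 * window:
        return False
    recent = sum(grades[-window:]) / window
    prior = sum(grades[-2 * window:-window]) / window
    return prior - recent > drop

def nudges(students):
    """Yield (email, message) pairs for students who should get a nudge."""
    for s in students:
        if grade_is_dipping(s.recent_grades):
            yield (s.email, f"Hi {s.name}, your recent grades have dipped. "
                            "Consider visiting office hours this week.")

# Invented roster for illustration only.
roster = [
    Student("Ana", "ana@example.edu", [90, 88, 91, 70, 72, 68]),
    Student("Ben", "ben@example.edu", [85, 84, 86, 87, 85, 88]),
]
for email, msg in nudges(roster):
    print(email, "->", msg)  # only Ana's average dipped, so only she is nudged
```

A real system would sit on top of an LMS gradebook and a messaging service; the persuasive-design question is what happens to a student's habits once messages like these stop arriving.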

Burri Gram-Hansen: I think the main difference is that persuasive technologies have focused on continuous behavior change, while nudging has a stronger focus on momentary behavior change. It's the moment you are confronted with the nudge that you are nudged into doing something specific. People are nudged into little things like, for instance, waiting in line the right way in the IKEA restaurants. But because they don't process what they're doing, that doesn't mean that they're all of a sudden really good and well behaved while waiting in line to get on the bus or catch a train. They don't process the activity they're actually doing; they just do it.

Noble, the professor at USC, goes one step further, arguing that in education, nudges aren't only influencing student behavior; these platforms and technologies work in both directions, shaping the perceptions and behaviors of staff and faculty as well.

Noble: We know that, for example, admissions officers and offices on university campuses are engaging with large-scale databases that help optimize decision making for them. And I think we can think of that as an analysis of a lot of different kinds of factors that students report on, their activities and grades and test scores and who they are and keywords they might be using in their essays, and then delivering a set of recommendations or ideal candidates to universities.

This is where I see really troublesome ways of thinking about admitting students to a university. Because, of course, there are plenty of people who should go to college and who are brilliant and who may not have the right “keywords,” so to speak, in their backgrounds. That can't be captured by a database trying to do analytics on their profiles. So these kinds of profiling systems that we see in higher ed definitely require a lot more study. They are certainly persuading the decisions around admissions.
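Noble's worry can be made concrete with a toy example: a keyword screen rewards whatever vocabulary it was told to look for, so an applicant with a strong story told in different words scores zero. Everything below (keywords, essays) is invented for illustration and is not drawn from any real admissions system:

```python
# A deliberately naive keyword screen: count how many target keywords
# appear in an essay. The failure mode is that substance expressed in
# other vocabulary is invisible to the score.
TARGET_KEYWORDS = {"leadership", "internship", "research", "volunteer"}

def keyword_score(essay: str) -> int:
    """Number of distinct target keywords present in the essay."""
    words = {w.strip(".,").lower() for w in essay.split()}
    return len(words & TARGET_KEYWORDS)

essays = {
    "applicant_a": "My internship and research experience built my leadership.",
    "applicant_b": "I ran my family's shop while caring for my siblings, "
                   "learning to manage money, people and time under pressure.",
}
for name, essay in essays.items():
    print(name, keyword_score(essay))
# applicant_a scores 3; applicant_b, arguably the stronger story, scores 0.
```

Real profiling systems are more elaborate than a set intersection, but any model trained or tuned on "ideal candidate" vocabulary inherits a version of this blind spot.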

Proponents of persuasive design think the better solution is to teach users about how technology influences them.

Burri Gram-Hansen: Don't get me wrong, it's not about teaching everyone how to be able to design persuasive technologies. But it is making sure that people—especially younger people—understand the principles of what these different systems are doing. Because if they understand the principles of persuasive systems, then that also means that they are able to reject unwanted influence. And that pretty much means that they are still in control.

But what about those potential good applications? Could there be a way that behavior design could push a faculty member to avoid some of their own biases? Could the technology really change a student's routine so that they complete courses and their degree?

Noble: I think it also has an unintended effect of making them reliant on those things. What we want is for students to not have to get a prompt, but to actually be able to generate [that themselves]. And that's a really important life skill, also. Maybe I'm old-school about it when it comes to students. I do feel that having the capacity to make a lot of decisions and manage their own academic success and pathways is really important. And to the degree that learning management systems and other [systems] are shaping their behavior, I think that, in the end, that will probably do them more of a disservice.

When I think about things like bias, I don't think that's something you can automate out of people by giving them a strike or a ding, or making them aware in ways that use these technologies. I'm not sure how one polices faculty behavior such that you get the kinds of outcomes you're looking for.

One of the most effective ways to reduce bias and have faculty engage in anti-racist behavior, or in behavior that empowers women and multiple genders in their classroom, is more education.

I guess the question with these systems is, what's their relationship to better educating and keeping a process of lifelong learning alive for teachers and instructors? And students?

Okay, so what should we do?

Quihuis: Safe deployment. You say, "Okay, how do we bring ethical concerns in at the conception of the technology?" And, "Since we are gathering all of this data anyway, how do I have engineers looking for patterns that are unusual?" The other thing you need is more diversity on the teams. Those design teams have intellectual, cultural, and social blind spots. So the diversity isn't because of kumbaya; the diversity is to de-risk your product so it isn't used in a manner that you didn't intend.

I think part of it is for institutions, for companies, for customers to ask those questions. And to say, "I need to see an audit trail." "How do I check if I have a question?" "If there is algorithmic bias, how can I verify that?" And we need to have evidence that that has been done to some standard, some certification.

You can't be a civil engineer in the state of California without being a registered engineer who knows how to design buildings for seismic safety. We need to have the equivalent of that for these technology platforms, especially when they touch people.
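One modest form the audit Quihuis describes could take is a demographic-parity check over a system's decision log. The sketch below runs on synthetic data; the choice of metric, the groups, and any acceptable threshold are assumptions that a real standard or certification would have to specify:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Given (group, admitted) pairs, return the admit rate per group."""
    totals, admits = defaultdict(int), defaultdict(int)
    for group, admitted in decisions:
        totals[group] += 1
        admits[group] += int(admitted)
    return {g: admits[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Synthetic decision log: (applicant group, was admitted).
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

print(selection_rates(log))  # {'A': 0.75, 'B': 0.25}
print(parity_gap(log))       # 0.5
```

A gap this size would not by itself prove discrimination, but it is exactly the kind of verifiable number an audit trail could surface for a reviewer to question.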

So IKEA isn't the only one doing this. Persuasive tech is already here, and it's only going to grow. The real question seems to be: how does that growth happen in a healthy and productive way?

Noble: The most important thing that I've learned from my own research is that investing more and more into private companies to provide the backbone and infrastructure for learning and for knowledge is probably a step in the wrong direction.

Someone is designing the technologies that might be persuading us, and those should be transparent. We should understand the ethical frameworks around those technologies. We should understand whether we have an opportunity to opt out. We should be thinking about whether the public can be harmed by those persuasions. And we should really understand them in the short and long term.

Without that kind of transparency and control, I think we put a lot at stake in these kinds of education spaces.

We're at a crucial moment where we might fully embrace certain kinds of projects that we can't easily come back from. So it's a good time for us to be reflective.
