
How Education Is Becoming the Front Lines for Debating the Role of Algorithms

By Jeffrey R. Young     Mar 10, 2020


This article is part of the collection: The EdSurge Podcast.

Lively debates are breaking out these days about algorithms and how they should be used in education.

These range from concerns over what happens to data in learning management systems like Canvas to questions over whether campuses should ban facial-recognition software.

But how much “algorithmic literacy” do most students have, and are college professors preparing students for a world increasingly influenced by artificial intelligence?

We tackle those questions on this week’s EdSurge Podcast, which was originally scheduled as a live session at SXSW EDU. When the event was canceled, we asked the panelists to do the session remotely instead.

Our guests include:

  • Barbara Fister, a longtime academic librarian who is also serving as a scholar in residence at Project Information Literacy, where she just worked on a report called “Information Literacy in the Age of Algorithms.”
  • Sarah Ogunmuyiwa, an undergraduate at the University of Texas at Austin double majoring in women and gender studies and philosophy, who is researching the intersections of surveillance, beauty and desire.
  • Bryan Short, a program director at British Columbia’s Freedom of Information and Privacy Association in Canada. A few years ago, when he was a student at the University of British Columbia, he was an activist on privacy issues and the use of algorithms on campus.

Listen to this week’s podcast on Apple Podcasts, Overcast, Spotify, Stitcher, Google Play Music, or wherever you listen to podcasts, or use the player below. Or read the partial transcript, which has been lightly edited for clarity.

EdSurge: Barbara, your group, Project Information Literacy, recently published a report where you sat down with professors and students to talk about algorithms and how they’re impacting our lives. How did you go about the study?

Fister: We chose eight institutions [and looked at a representative sample] of student demographics, different types of institutions, community colleges up through [major research institutions]—all geographically varied. We talked to 103 students … and then we interviewed 37 faculty at these institutions as well. Then we processed the transcripts and looked for patterns—we did some coding and came up with some takeaways from what we heard.



This week’s podcast is brought to you by “The State of Online STEM Education,” an upcoming national survey of the online STEM landscape created by Every Learner Everywhere and the Online Learning Consortium.

Share your experience in these areas: Effectiveness, Opportunities, Challenges, Equity & Access.

Please sign up and take the survey at http://studyinput.com/


How aware were students of the role algorithms play as they navigate the world of information these days?

Fister: It was quite high. All of them seemed to be very aware of what was going on, and I think it’s because advertising has tipped the hands of these tech companies. They could see ads, which get creepy, following them around the internet across devices and platforms, and that’s how they became aware of it, and they’re kind of disturbed by it. They were really indignant about the ways that their privacy was being compromised, and the ways that they were being typecast through this process of gathering data on them.

They were also resigned to it. They didn’t think they had any choice, and they didn’t think they had any way of making these companies change the way they operate.

I think there’s some space in between the indignation and the resignation, where there could be some really interesting conversations about: “So, we don’t like it. What are we going to do about it? What are the mechanisms in society for doing something about it?”

That’s left me hopeful that the resignation piece could be somewhat turned around into something more of an activist approach, like: “What are you people doing with my data? I don’t want you to do those things.” And [the feeling that] I can actually have some effect on what these companies are doing if we work together on some social solutions.

What about the faculty you spoke with? What were their views on these same issues?

Fister: That was really fascinating to me. Students were concerned, and they were aware, and they [adopted] certain kinds of privacy practices. They were learning from each other how to use VPNs, ad blockers, and a lot of other things.

The faculty were absolutely horrified by what’s going on in the world of information. A lot of them really kind of let loose, like, “Oh, this is a crisis, this is really bad. We need to do something about it” [or] “This is making it hard to know what’s true, and what’s not true.” But they weren’t relating that to their own work with students.

When asked, “So what do you do in your courses about this? You seem to be really concerned about it,” [faculty would say,] “Oh, well, I don’t know. I hadn’t thought about that. You know, we bring a librarian in to talk to the class.” Most of them really didn’t have any idea how they could talk about it, and I don’t think they felt necessarily qualified to talk about it.

So for the students it was a little bit more personal: “I don’t like this, I’m going to do something about it.” And for the faculty it was like, “I don’t like this. Somebody should do something about it.”

Let’s talk about facial recognition on campus. Sarah, what is it about facial recognition technology that is raising some concerns?

Ogunmuyiwa: What’s raising concerns is that a lot of people aren’t aware that their faces are being registered. A lot of people are not aware that this technology is being used on them.

Another issue with facial recognition technology is that it is based off of an algorithm, and it’s based off of an idea of what the model human would be. [The default is] being cisgender, or being a white male. It’s based off of that model, and not everybody falls into that. The way facial recognition technology works is that sometimes it doesn’t register everyone, which could be either a good or a bad thing.

For example, people that have darker skin may not be able to use it as easily, because they’re not as easily detected by such software. It raises issues of, “Well, if these models are based off of an arbitrary idea of what a human being looks like, then how is everyone going to be registered under the same system? Is that a good thing or a bad thing?”
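A disparity like that can be checked, at least in principle, by running the same detector over a labeled test set and comparing how often it fails to find a face in each group. Here is a minimal, hypothetical Python sketch of that kind of audit; the groups and results below are invented placeholders, not measurements from any real system.

```python
# Hypothetical audit sketch: compare a face detector's miss rate across groups.
# The results below are made-up placeholders, not data from any real detector.
test_results = [
    # (group, was_face_detected)
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

counts = {}
for group, detected in test_results:
    hits, total = counts.get(group, (0, 0))
    counts[group] = (hits + int(detected), total + 1)

# A large gap between groups' miss rates signals the kind of bias described above.
for group, (hits, total) in counts.items():
    print(f"{group}: miss rate {(total - hits) / total:.0%}")
```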

I think facial recognition technology is being rolled out in a way where it’s being seen as something that’s neutral. And technology is never neutral, because technology is modeled after human behavior, and human behavior is not neutral.

An example of facial recognition technology would be using your face to unlock your iPhone. A lot of people aren’t really suspicious of that. They just see it as, “Oh, this is a really cool new technology. I don’t even need to use my hands. I can just use my face to unlock my phone.” But I feel like people should be wary of these things, because: Is there information being collected through these devices, through these algorithms? And if information is being collected, is there transparency about what’s being collected and what it’s going to be used for? It also raises issues of consent.

You’re part of this nationwide effort to ban facial recognition on campus, or stop facial recognition from coming to campuses. Right?

Ogunmuyiwa: Yeah. I’m very wary about facial recognition technology being on college campuses because a lot of college campuses already use security cameras. I know UT [Austin] has a lot—probably thousands of security cameras around campus. I was doing a little research on this [and I learned that] the footage from the security cameras is not open access. You can’t request it through FOIA. It’s exempt from that.

Since there’s such a lack of transparency when it comes to security cameras on campus, it has me worried about, “If they were to bring face recognition technology, what information do we have access to?” Also, many students didn’t consent to these things. So I feel it’s a concern. I feel more people should be talking about it.

It sounds like there are people who are thinking through ways to kind of subvert facial recognition technology. One involves even makeup. Could you talk a little bit about this?

Ogunmuyiwa: Yeah. The technique was started by Adam Harvey, who is an artist. He also created something called an anti-drone burqa, where he wove metal into the fabric, and it’s supposed to help you go undetected by drone technology.

He also was thinking about ways to go undetected by facial recognition technology. It turns out that the way the technology works is that it registers different parts of the face: maybe underneath your eyes, your forehead, your nose, your chin. He found a way to go undetected by that technology by covering up those places and kind of neutralizing it.

The technique is called CV Dazzle. It’s adapted from a technique used in World War II where they would paint black and white stripes on the sides of warships, and that would help the ships go undetected. He adapted that and made it into a makeup technique. It’s really cool, and I think it’s a really good way of exploring facial recognition technology in an aesthetic way.
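One way to get a feel for what covering up those facial regions does is to run a stock face detector over a portrait with and without a dazzle-style pattern. The sketch below assumes OpenCV’s bundled Haar cascade and placeholder image files; it is not Harvey’s method, just a simple stand-in detector for experimenting.

```python
# Minimal sketch: does a stock feature-based face detector still find a face
# once key facial regions are obscured (the idea behind CV Dazzle)?
# Requires opencv-python; the image filenames below are placeholders.
import cv2

def face_detected(image_path: str) -> bool:
    """Return True if OpenCV's stock frontal-face Haar cascade finds a face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

print(face_detected("portrait_plain.jpg"))   # ordinarily True
print(face_detected("portrait_dazzle.jpg"))  # CV Dazzle aims to make this False
```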

The CV Dazzle makeup approach by artist Adam Harvey was originally developed as a master's thesis project while he was at New York University. It is meant to subvert facial-recognition software.

What I’ve been working on and thinking about with my friend is how we can use makeup, beauty and aesthetics to resist facial recognition technology, but in a way that doesn’t make you stand out in real life as well.

Bryan, back when you were a student at the University of British Columbia, in 2016, you pressed your university to release all the data it tracked on you as a student using the course-management system, as a way to bring awareness to student data privacy.

Short: I underwent that very formal legal process to try to get my data. It ended up being a little bit of a fight; it didn’t come easily. It took several months, and by the time I got it I realized like, “Wow! They’re collecting pretty much everything that they can.” There was some rhetoric around why it was being collected, how they were going to use it. And when I began to sort of dig into that rhetoric and say, “Is this really how it’s being used? Could it be being used in different ways?” I was pretty concerned by what I found.

Did you feel like it was collecting and storing more than it needed to as an institution?

Short: Yeah. And, the way that they justify this was to say, “With all of this information being collected, we can now identify students who are struggling within a course, and offer them interventions and try to help them. Allow them to succeed in the course.” And I thought, “Okay, if that’s happening, fantastic. That’s a totally worthwhile thing to do.”

But then I asked for any kind of evidence, like “Can you give me an example of a student where this has worked?” The administration sort of said, “Well actually, it doesn’t work that well. We’ve never actually been able to do that before, but that’s kind of the idea.” As I began to consider and look at all of the information that was being collected, I said, “Well, you could be harming students in this way, or in that way.”

What did you see as a potential harm here?

Short: I’ll talk a little bit about what the data was. Whenever you log into this portal, timestamps are recorded, and how long you spend on each page, and where you click, and where you go. All of this is logged into something called the “Performance Dashboard,” which created a very easy way for instructors to rank and look at how engaged a student was in the portal. This in turn would inform participation grades, especially in online courses.

A student who logged in, spent a bunch of time in the system but maybe wasn’t actually doing anything necessarily productive [might get credit for participation]. They could have just had the window open in the background, and would appear to be more engaged than a student who just logged in once at the beginning of the year, downloaded everything, and then logged in periodically just to submit assignments or submit a comment. In that sense the assessment of a student could be biased depending on how they were using the system.
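To see how that bias can creep in, consider a toy version of a time-in-portal “engagement” score. The sketch below uses invented event data and an invented ranking rule; it is not the Performance Dashboard’s actual logic, only an illustration of how raw time-on-site rewards an idle session over an efficient one.

```python
from collections import defaultdict

# Hypothetical LMS access-log events: (student, minutes a page stayed open).
events = [
    ("passive_student", 55),  # portal left open in a background tab
    ("passive_student", 40),
    ("active_student", 3),    # logged in, downloaded materials, submitted work
    ("active_student", 2),
]

minutes = defaultdict(int)
for student, duration in events:
    minutes[student] += duration

# Ranking by raw time-on-site puts the idle session on top.
for student, total in sorted(minutes.items(), key=lambda kv: -kv[1]):
    print(f"{student}: {total} minutes of 'engagement'")
```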

These days people are talking more about data in learning management systems, and there are concerns over the sale of Instructure, which makes the Canvas course management system.

Short: It’s kind of almost a worst-case scenario situation, where you have an agreement as a public institution with one company, and you’ve got a contract with them for the way that they’re going to treat and use this information. Then the company might be bought, and [someone else could use] the data for perhaps something else entirely. What does that mean for privacy?

Listen to the full discussion on the podcast.
