‘Weapons of Math Destruction’: A Data Scientist’s Warning About Impacts of Big Data

By Jeffrey R. Young     Aug 22, 2017

This article is part of the guide: Crossing the Finish Line: Stories on Student Success and What Colleges Are Doing to Get There.

These days algorithms have taken on an almost godlike power—they’re up in the (data) clouds, watching everything, passing judgment and leaving us mere mortals with no way to appeal or to even know when these mathematical deities have intervened.

That’s the argument made by Cathy O'Neil in her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” If algorithms are gods, she’s one of the high priests, as a data scientist and mathematician.

These days O'Neil is trying to challenge this divine narrative of Big Data and point out how fallible the mathematical frameworks around us are—whether in financial systems, in social networks or in education. As she writes, “many of these models encoded human prejudice, misunderstanding and bias into the software systems that increasingly manage our lives.”

EdSurge connected with O'Neil to hear how her behind-the-scenes view of the 2008 financial crisis led her to try to push for tools that can audit Facebook, Google, and other algorithm-fueled systems now asserting themselves in human affairs.

The conversation has been edited and condensed for clarity. You can listen to a complete version below, or on iTunes (or your favorite podcast app).

EdSurge: What led you to leave that world of finance and decide that there’s a dark side to Big Data?

O’Neil: When I turned 15, I decided I was going to become a math professor—I was at a math camp at the time, and it just seemed like the most glamorous thing I could imagine. And when I was 35, there I was: I was a Barnard College math professor, in a combined math department with Columbia University, living in New York City, and I was like, "I made it, and this is awesome.” But I wasn't particularly psyched with the pace of the job. Nor was I particularly psyched with the actual department I was in. So I just jumped out, and I started working immediately at D.E. Shaw, which is a hedge fund, and I started in June 2007, so right before the credit crisis started breaking.

It was a pretty fancy hedge fund. I worked with Larry Summers there. So we got a pretty amazing view of the financial crisis from the inside. In particular, we became aware, I think well before the average person and the rest of the public, of just how important the mortgage-backed securities were—in particular, the triple-A-rated mortgage-backed securities, and how they had been mislabeled. The more I learned about that, the sicker to my stomach I became. I just felt like, "Wait, this is mathematics, and it's being used to deceive. It's like a mathematical deception. A lie." Moreover, it was really, really destructive. Investors from all over the world believed that these mortgage-backed securities were really safe, and they bought into them—much to their detriment later.

I was honestly pretty ashamed of that, although I should mention that my hedge fund had nothing to do with those ratings; that was the credit-rating agencies. I was still quite naïve in thinking essentially that I could use mathematics to fix bad mathematics. I wanted to go and fix risk, the way we understood risk, so I joined a firm called RiskMetrics, which did risk analysis for almost all the big banks and hedge funds. My idea was, we're going to reimagine the risk of these instruments that had gone crazy during the crisis. Credit default swaps in particular. But soon after that, I realized that no one cared. I came up with something that would expose more risk, and people were like, "We don't want to know about more risk. We already have too much risk on our books."

That's when I was like, "Wait, this isn't a math problem. This is a political problem." The fact that the banks were bailed out and they didn't really have to deal with the damage they'd done. Nobody went to jail. I really became disillusioned, and I also understood that mathematics alone can't solve big, big problems that are actually political in nature. It requires the political will as well. I actually left finance wanting to get away from it as far as possible, because I didn't think I could help.

You note that a lot of “poisonous assumptions” are camouflaged by math in some of these algorithms—whether those are in advertising, or prisons, or education. Why do you think that happens so often?

For example, the original risk model I was using (at RiskMetrics) was intended to encapsulate risk with one number. But as soon as you're doing anything with one number, you're, by necessity, dumbing down a lot. There were just a bunch of assumptions that were better than nothing at the time, but they ended up being easily gamed so that people could hide risk. As soon as you make a metric that's important in people's lives, they're going to learn to game it, and then they pervert it beyond all original meaning. That's what you see very often in the world of algorithms.

Algorithms, at the end of the day, are typically scoring systems. As soon as you have a scoring system, then you can game the scoring system. If you game it enough, it'll stop making sense. That's essentially what happened.
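To make that concrete, here is a minimal sketch of the gaming dynamic O'Neil describes. It is a made-up toy score, not her model or RiskMetrics' actual methodology: because the single number rewards a proxy (reported collateral) rather than true loan quality, inflating the proxy makes a portfolio look safer without reducing any real risk.

```python
# Toy one-number risk score built on a proxy (hypothetical, for illustration only).
# Because the score rewards reported collateral rather than true loan quality,
# anyone measured by it can "improve" it without reducing real risk.

def risk_score(reported_collateral: float, outstanding_debt: float) -> float:
    """Higher means 'safer' in this toy scoring system."""
    return reported_collateral / (outstanding_debt + 1.0)

honest = risk_score(reported_collateral=100.0, outstanding_debt=500.0)
gamed = risk_score(reported_collateral=400.0, outstanding_debt=500.0)  # inflate the proxy

print(f"honest portfolio score: {honest:.2f}")
print(f"gamed portfolio score:  {gamed:.2f}")
# Same underlying loans, four times "safer" on paper: the metric has stopped measuring risk.
```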

I'm curious about a higher education example you talk about in your book. U.S. News & World Report's college rankings, which a lot of people in the higher ed world have complained about for a while, still loom large. What are your issues with the rankings, and what are your recommendations for improving or replacing them?

The U.S. News & World Report college ranking algorithm is really old and really horrible. It's horrible because it's super gameable, and college administrators have perverted the concept of college itself, I would argue, in order to boost their ranking on that list. I blame parents, too, because they are the ones who care so much about the ranking of their children's school. It has just been given this absolutely outsized power by society. And the worst part is that it doesn't account for one of the most important things: namely, price. The result has been that administrators, in their attempts to boost their rankings, have ignored tuition costs. They've done really expensive things that have raised tuition without actually improving education.

What I'd like to see done about it is for people to stop looking at lists.
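As a rough illustration of the incentive problem described above, here is a toy weighted ranking with invented weights and inputs (not U.S. News's actual formula): because nothing in the composite penalizes price, spending more per student lifts the score even when student outcomes are unchanged.

```python
# Hypothetical composite ranking score with made-up weights, for illustration only.
# There is no term for tuition or price, so spending more always helps the rank.

def composite_score(grad_rate: float, reputation: float, spend_per_student: float) -> float:
    """Toy ranking score; higher ranks better."""
    return 0.4 * grad_rate + 0.3 * reputation + 0.3 * min(spend_per_student / 100_000, 1.0)

before = composite_score(grad_rate=0.80, reputation=0.60, spend_per_student=40_000)
after = composite_score(grad_rate=0.80, reputation=0.60, spend_per_student=80_000)

print(f"score before spending spree: {before:.3f}")
print(f"score after spending spree:  {after:.3f}")
# The score rises purely because per-student spending (and likely tuition) went up,
# with no change in graduation rate or anything students actually experience.
```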

We're at a political and cultural moment when people are thinking a lot about their filter bubbles, and how they're getting information—and sometimes misinformation—online. As somebody who has a deep understanding of these algorithms and how they work, are you optimistic at all that we can find a way to create social networks and online platforms that promote a more healthy information diet?

No, I wouldn't call myself optimistic. I would say that there are healthy steps being taken by those social-media giants. Back when Zuckerberg was claiming that fake news wasn't a big problem, that was an excruciating moment. I think things have gotten slightly better since then, but it's certainly nowhere near where it needs to be. The way I look at it is that Facebook optimizes for profit. Facebook optimizes for so-called engagement, which basically means how much time we spend on Facebook. It just wants us to stay there forever, because the longer we stay, the more we click on their ads, and that's how they make money. They're optimizing for their bottom line, which is profit.

And they've essentially decimated journalism as an industry. Yet they refuse to actually hire journalists to edit their content, or even to fact check. In the meantime, they're making billions of dollars every presidential cycle sending us propaganda. The truth is that Facebook is a propaganda machine. Never before have candidates been able to tailor their message so precisely to the different constituencies in the voting strata. It is frankly beyond the pale, in the sense that we have no idea what messages are being sent to various groups.

In the waning hours of the Trump campaign, one of the campaign managers bragged that they'd sent Facebook ads to African-Americans as a voter-suppression tactic. Which is to say, they were sending ads to convince African-Americans not to vote at all. It's the opposite of the get-out-the-vote campaign, which we've always had. Get out the vote is partisan, in the sense that Democrats try to get the Democratic vote out, and Republicans try to get the Republican vote out. Now we will never again make the mistake of accidentally trying to get a Republican vote out for a Democratic campaign, because we know everybody's side; that's what data-warehousing and profiling does. We know everybody's affiliation, but moreover we're actually going to be sending voter-suppression ads to the other side. And in contrast to the laws about TV commercials, where they have to say, "This was sponsored by so-and-so candidate for governor," we don't have that online. It can be done with dark money, and it is being done. It's really disgraceful.

Again, it goes back to your conclusion from your time in finance, where you were trying to fix the problem with more math. Is this fixable with more math, or is this a social thing, or both?

This is absolutely a social thing. I would argue that we do need more math. Because what I want to do is I want to build auditing algorithms that audit algorithms. I would refer to this as a tool to understand what these black-box algorithms are doing, what they're doing to us, what they're doing to democracy. We need these new tools, but the most important thing is we need to get Facebook to show us what the ads are. What are you showing people? If you're really sending propaganda ads on behalf of the Trump campaign, to get people not to vote, what exactly does that ad look like? I think we deserve to know.
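A very simple version of the kind of audit O'Neil describes would treat the scoring system as a black box, feed it records, and compare its decisions across groups. The sketch below is a hypothetical illustration, not her actual tooling: `black_box_decision` is a stand-in for any opaque model, and the group-rate comparison is one common audit technique, a disparate-impact check.

```python
# Minimal black-box audit sketch: we cannot inspect the model, but we can feed it
# records and compare positive-decision rates across groups.

from collections import defaultdict

def black_box_decision(record: dict) -> bool:
    """Hypothetical opaque decision system (e.g., an ad-targeting or lending model)."""
    return record["income"] > 40_000  # stand-in logic; in reality this is hidden

def audit_rates(records: list[dict], group_key: str) -> dict:
    """Positive-decision rate per group; large gaps are a flag for closer review."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += black_box_decision(r)
    return {g: positives[g] / totals[g] for g in totals}

records = [
    {"group": "A", "income": 52_000},
    {"group": "A", "income": 47_000},
    {"group": "B", "income": 39_000},
    {"group": "B", "income": 43_000},
]
print(audit_rates(records, "group"))  # e.g. {'A': 1.0, 'B': 0.5}
```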

Have you looked much at personalized learning algorithms?

Yes, the edtech stuff. I should say that I don't want to be living in the stone age. I want us to use good tools that work for kids. If lessons on tablets really work well, let's use them. But what I worry about is this idea that we are surveilling our children from the get-go, and that we'll have a very, very long-term record of their abilities and their grit. That's a new thing they're being measured on: something called persistence, their persistence scores.

I think about my own son, who's a senior in high school now. He's a really smart guy, but he couldn't read until third grade. I wonder what his scores would look like. To the world of education, he wasn't looking good at the end of second grade. I think we should protect our children. We should think about what could go wrong here. A lot of these edtech algorithms don't seem to account for things like learning disabilities or cultural differences. They seem to be a one-size-fits-all type of thing, in spite of the fact that they're described as dynamic learning tools.

And they say they are personalized.

I think they have a very narrow definition of personalized, and they have a very narrow definition of what it means to be successful. I just feel like the last remaining realm where you can mess up and not be punished for it is childhood, and we're taking that away. Or we could be. Again, I don't want to say, "Let's not use this," if it's great, but let me put it this way: I want to see evidence that it works before we roll it out at a large scale.

I haven't seen any place where these edtech companies are describing what their evidence is, what they mean when they say it works, or how they're making sure things aren't going wrong. It's as if they're being trusted implicitly by the school systems in which they work, as though they must be doing good because they're technology. It's exactly the same lack of skepticism toward big data that I saw with the finance crash.

You propose a Hippocratic oath of sorts for data scientists. What would it mean to apply a Hippocratic oath for data science?

I was given an outrageous amount of power because I was a math PhD when I was working as a data scientist. They were like, "Well, you have a PhD, so you know what you're doing." But the questions I was supposed to be answering were ethical questions—they weren't math questions. Some of the things I was doing were pretty low stakes, but at some point I was working for the city of New York, trying to understand which homeless families would stay in the system for a long time. I was trying to decide, "Oh, should I use race as an attribute?" Then I realized, "Well, I guess it all depends on what the consequence of this will be. If a family is deemed at high risk of staying long, are they going to be given more support, or less?" If you think it through, you realize that you're sort of the de facto ethicist at the center of this algorithm. It's like, "I'm not trained to be an ethicist. I don't even know how to think about this correctly." I realized that I was going through these ethical quandaries, but most data scientists around me had not even done that. Why? Because they were computer science majors, or math majors, or engineering majors. They never discussed concepts of ethics.

That's one thing I think we absolutely need for anybody who might be working in a Facebook, or a Google or a city hall, where their algorithms will decide on people's fates. They have to think about ethics. But having said that, I don't want them to be the final word on ethics. I still think our specialty is not ethics. We have to know enough to know when we're making ethical decisions, but my ideal data scientist would actually consider themselves to be a translator. Then the data scientist's job would be to translate those decisions into code, because that's what we're good at.

Now, on the question of whether algorithms should be regulated, I am calling for that. I'm calling for regulation of algorithms of a certain level of importance: ones that operate at scale and that decide important things for people. Like whether they get a credit card, or how much they pay for insurance, or whether they get a job, or how long they go to prison. Or how they're assessed at their job. Those algorithms should be held to high, high standards.
