May 18, 2021
It’s no secret that we rely on technology in our daily lives. So it shouldn’t be a secret how that technology works, who is developing it, and just how biased it can be. This week on Getting Curious, returning guest Meredith Broussard joins Jonathan to discuss how data rights are civil rights—and how we can all get involved in efforts to defend them.
Data journalist Meredith Broussard is an associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of “Artificial Unintelligence: How Computers Misunderstand the World.” Meredith is featured in the documentary Coded Bias, directed by Shalini Kantayya, now available to stream on Netflix.
You can follow Meredith on Twitter and Instagram @merbroussard, and keep up with her work at meredithbroussard.com. For more information on Coded Bias, head over to CodedBias.com and follow @codedbias on Instagram and Twitter.
Transcripts for each episode are available at JonathanVanNess.com.
Check out Getting Curious merch at PodSwag.com.
Listen to more music from Quiñ by heading over to TheQuinCat.com.
213 — We’re Being Watched. Now What? with Meredith Broussard
Getting Curious with Jonathan Van Ness & Meredith Broussard
JVN [00:00:00] Welcome to Getting Curious. I’m Jonathan Van Ness and every week I sit down for a 40-minute conversation. Let’s be honest, it’s really, usually always longer than 40 minutes, I’m sorry, it used to be 40 and now I can’t stop asking questions, I don’t know what happened. But anyway, each week, we sit down with a brilliant expert where I get to ask them all about something that makes me curious. On today’s episode, I’m joined by Professor Meredith Broussard, where I ask her: We’re being watched. Now what? Our guest this week on Getting Curious is-, it’s our second time together. We are obsessed. We learned so much we can’t stand it. You are definitely someone who, when I met you, I smashed the follow button, and I have been following ever since. You also just made a gorgeous, if I do say so myself, debut on Netflix. You are Meredith Broussard, a data journalist. You are also an associate professor at the Arthur L. Carter Journalism Institute of New York University. You have also been newly featured in the documentary ‘Coded Bias,’ which is a major. It explores just how biased algorithms can be and the dangers of relying on them too much. Meredith, how are you? Welcome back.
MEREDITH BROUSSARD [00:01:08] I am really excited to be here. Thank you for the invitation.
JVN [00:01:12] I do want to kind of just quickly go through to get people up to speed, just in case they didn’t listen to our first episode. So your book, Artificial Unintelligence. You taught me, I think you talk about it in that book, it’s the technochauvinism, honey. I talk about this technochauvinism a lot. Every time I look at my blinds, I think, “I didn’t choose technochauvinism,” because I just got the good old-fashioned ones. And that was because of you. I was like, “I’m not getting these automatic, chauvinist blinds.” Yeah, I’m not.
MEREDITH BROUSSARD [00:01:42] Oh, I’m so glad! Because those things would break.
JVN [00:01:46] They would break. And I’m not doing it. So, but just for people who may not have listened to our first episode, or maybe they haven’t read Artificial Unintelligence: what is an algorithm? What is artificial intelligence?
MEREDITH BROUSSARD [00:01:58] So there are a lot of misconceptions out there about artificial intelligence. Artificial intelligence is just math. Now, we often get really confused with all of the Hollywood ideas about AI, but what’s real about AI is that it’s math. And we talk about algorithms in connection with AI because an algorithm is a set of instructions or a set of steps that a computer follows in order to get the desired result. So an algorithm governs what shows up in your social feed, for example. That’s a pretty benign use of an algorithm, but then there are really bad uses of algorithms, like when algorithms are used in facial recognition in policing and people can be wrongfully accused, wrongfully arrested because decision-making has been turned over to algorithms. And that’s obviously terrible. And the pro-technology bias is something I call technochauvinism. So technochauvinism is the idea that computers are superior to humans. And instead what we should be thinking is, “What is the right tool for the task?” Because sometimes the right tool for the task is a computer. And sometimes it’s something simple, like a book in the hands of a child sitting on its parent’s lap. It’s not a competition.
JVN [00:03:23] I think people can be naturally a little bit more leery of artificial intelligence because it sounds scarier? When we think about an algorithm, it sounds, just kind of like you said, it does sound, like, a little bit more benign, even though some can be, like, more, like, malevolent or whatever that word is. But some of them, you know, it’s just, “That’s just how your YouTube is going to pull up what video you should watch next, or what Facebook thing, you know, you’re going to see next.” It doesn’t seem like it’s that big of a deal. But why do we think that algorithms are, like, a neutral thing?
MEREDITH BROUSSARD [00:03:56] Oh, that’s a really good question. I think it goes back to the roots of technochauvinism, but it also goes back to marketing. Right. So when Facebook and YouTube and what have you started, nobody really talked about how the information got into the feed. And then when people started getting curious about: “How did that information get there?” “Why am I seeing things from some friends and not from other friends?” You know, the tech companies responded by explaining, and the explanation is algorithms. One of the ways we started talking about algorithms is actually in the context of social justice.
So a few years ago, there was an investigation by ProPublica called ‘Machine Bias.’ It was by Julia Angwin, who now runs a news organization called The Markup, and they’re doing amazing work in algorithmic accountability. So holding those accountable who are using algorithms to make decisions on people’s behalf. But that first investigation, the ‘Machine Bias’ investigation, showed how a very popular algorithm called a recidivism algorithm claimed to predict who was going to be a repeat offender after they were arrested. And it turns out that this algorithm was biased against Black people. And mathematically, there was no way for this algorithm to treat Black people and white people fairly.
JVN [00:05:25] Who was using this algorithm?
MEREDITH BROUSSARD [00:05:27] So it was generally used by law enforcement. So judges would get the score. So somebody would be arrested and they would take this quiz and it would generate a score. And then the judge could use the score in deciding if somebody gets bail or what kind of sentence somebody gets.
JVN [00:05:46] What! Was this federal? Is it, like, something where anyone that gets arrested in the country takes this quiz?
MEREDITH BROUSSARD [00:05:53] Well, it-, it depends on the municipality. We can’t say, oh, this is happening everywhere because it depends on the individual contracts that individual police departments have with individual vendors, blah, blah, blah.
JVN [00:06:05] Who are these vendors that sell these quizzes?
MEREDITH BROUSSARD [00:06:08] It’s horrifying, isn’t it? So we really need to be conscious of the way that algorithms are used in policing and we need to be conscious of the way that algorithms discriminate by default. And if we’re going to use algorithms in policing, then those algorithms are going to be biased against people of color, against poor people. And one of the lessons from the film ‘Coded Bias’ is that we shouldn’t be using algorithms in policing.
JVN [00:06:44] So, many municipalities have this ability to rate people that get arrested. And by the way, like, just because you’ve been arrested doesn’t mean that you’ve been convicted of something. And there is this whole thing in, like, the United States about, like, you know, “You’ve got to be, like, assumed innocent until proven guilty.” So it is interesting how, like, from the onset, if you’re already being graded on your chances of potentially committing a crime again, you’re already being assumed to be guilty. You’re already being assumed to be, like, a number in the system. So that is a really big deal. So back to that question before I interrupted you 17 million times: why people think that these algorithms are neutral is because of marketing. It was because people from these companies said, like, “Oh, it’s an algorithm.” And then what happened?
MEREDITH BROUSSARD [00:07:38] Exactly. Exactly. People originally thought, “Oh, well, algorithms are just math and math is more unbiased and more objective than decisions made by humans.” And you know who told us this? You know who popularized this notion? Mathematicians! Who have this-, mathematicians are really great, but a lot of them have a kind of snobbishness, a sense of superiority about mathematics. There’s this idea that math is separated from the real world, is somehow on a higher plane or more holy. And that transferred over to algorithms, which, after all, are mathematical processes, because all a computer is doing is math. Right. It computes. It’s in the name. Right.
So a computer is a machine that’s doing math. And when we use mathematical machines in order to make social decisions, we’ve run into problems. So one of the ways that I-, that I like to talk about it is I like to talk about fairness as a machine understands it and fairness as a human understands it. And this starts with a cookie. Right. So when I was little and there would be one cookie left in the house in the cookie jar, my little brother and I would fight over who got the cookie. And if a computer was going to solve this problem, the computer would say, “OK, well, each child gets 50 percent of the cookie and that’s going to be fair,” which is absolutely true. That’s mathematically fair.
But in the real world, when you break a cookie in half, there’s a big half and there’s a little half. And then my brother and I would fight over who got the big half and the little half. And so if I wanted the big half, I would say to my brother, “OK, you let me have the big half now and I will let you pick the TV show that we watch together after dinner.” My brother would say, “OK, that sounds fair.” And it was, it was a socially fair decision. And so mathematical fairness and social fairness are not the same thing. They don’t have to be because they’re, they’re just different categories of things in the world. And often we want things that are socially fair, that are not necessarily mathematically fair. And this is why we run into so many problems when we start using computers to make social decisions. Computers are just not the appropriate tool for everything.
JVN [00:10:16] Also, because, too, like, in that example, maybe your brother, like, kicked you in the shin, like, two hours before, so you actually should get the whole cookie. Because, like, an algorithm doesn’t have context; it can’t give context on history and, like, what kind of led to that very point. I just want to broaden people’s minds to the idea that these algorithms are already ingrained in our lives. They’re in a lot of different areas that we may not think about. Can you tell us about some of the other areas where we’re interacting with these algorithms and maybe don’t even know?
MEREDITH BROUSSARD [00:10:55] Oh, there are so many ways that you’re interacting with algorithms. When you use Google Search, there are something like two hundred and fifty different mathematical models that are activated in order to pull up your search results. Algorithms govern what goes into your social feed. When you use closed captioning on videos, that’s usually algorithmic transcription. You know, we use AI-based translation when you’re trying to, say, read tweets in a different language. And then behind the scenes, there are all kinds of algorithms that you don’t think about.
One that I wrote about recently for The New York Times was a really terrible decision where people at the International Baccalaureate used algorithms in order to assign real grades to imaginary students. So the International Baccalaureate exams, which are like the AP exams, got canceled because of the pandemic, because they couldn’t hold exams in real life. And they decided, “Alright, well, we couldn’t hold the tests for real. So we’re just going to use an algorithm to predict what grades these students would have gotten had they actually taken the test,” which is such a terrible idea. I mean, you say it out loud and it sounds absurd, but at the time, these, you know, educational bureaucrats thought it was a good idea. And so they used, they used these predictive models in order to predict.
But here’s the thing. The students at the poor schools were predicted to get worse grades than the students at the rich schools. The students who attend schools that are majority POC were predicted to get worse grades than the students at the whiter schools. Because when you look at what the data shows us, yes, America has systematically underfunded its public schools. And, you know, one of the ways that racism and white supremacy works is, you know, the schools that are full of kids of color get less money and have worse outcomes than the whiter schools. It’s about racism. But what the algorithms are doing is they’re taking in the data about what exists and they’re making a mathematical model to say, “Let’s reproduce what we see in the data. Let’s look for the patterns, the hidden patterns in the data, and make decisions in the future based on what these patterns are.” And it’s horrific. What you’re doing is you’re just reproducing existing inequality.
JVN [00:13:57] Were these grades assigned to kids? And then those grades were used for, like, acceptance into schools and, like, other opportunities?
MEREDITH BROUSSARD [00:14:06] Yes. So the way that this really, really matters for International Baccalaureate is that the IB diploma is something that you can use in order to get college credit. So if you take enough of these exams and you get enough advanced credit, you can actually get something like two years’ worth of college credit based on your high scores in high school. So I interviewed a young woman in Colorado who had gotten straight A’s all through school, a native Spanish speaker, and the algorithm predicted that she would fail her IB Spanish exam, which is absurd, because Spanish is her first language. And she translated an entire novel. Before the pandemic shut down her school, she was reading Camus’ novel ‘The Plague’ in Spanish and translating it to English. This was an amazing student. And the algorithm said, “Oh, this student goes to a poor school. They’re definitely not going to pass their Spanish exam.”
JVN [00:15:10] And that grade stuck for her?
MEREDITH BROUSSARD [00:15:12] It created so many problems for her, because it meant that she did not get the advanced credit that she deserved.
JVN [00:15:23] And there was no recourse.
MEREDITH BROUSSARD [00:15:25] Well, there was. She actually is somebody who I talked to because she protested, and I was so proud of her. And I was just so honored to be able to talk to her about this, because she protested and she felt empowered to say, “No, I am talented, I know Spanish. This is not the grade that I should have gotten on the test, based on, you know, input from my teachers, based on my earlier grades.” And she’s one of thousands and thousands of kids who protested their grades. And this actually didn’t just happen in the US. It also happened in the UK. So the A-levels are these exams that you take in order to get into college in the UK, and the A-level people also decided to use an algorithm to assign imaginary grades, and they also totally screwed it up. And all of these kids didn’t get into college because of the bad grades assigned to them by the algorithm. It was horrible.
JVN [00:16:29] But in order for the girl in Colorado to protest, like, wouldn’t you have to have, like, some, like, money and, like, resources to, like, even know what to do? Like, you have to have at least some social capital or literal capital.
MEREDITH BROUSSARD [00:16:40] Mm hmm. I’m so glad you brought up social capital, because you do need social capital. You need educational capital, and you need to feel empowered, empowered enough to say, “The computer is wrong.” And so a lot of people don’t feel like they can say, “The computer is wrong.” Right, a lot of people think, “Oh, yeah, the computer is, is more unbiased, is more objective. And if the computer says I’m going to fail, then I don’t really have any recourse.” And so, I mean, one of the things I would love for people to get out of the conversation is the idea that they are, they do have power, that they can protest the decisions that algorithms are making. And I think we need more mechanisms for protesting algorithmic decisions in the world.
JVN [00:17:28] Well, one of the things that ‘Coded Bias’ talks about is that it’s really not a question of whether or not we’re being watched, but really how we’re being watched. So, yes, like, we are being observed in all these different ways, but how? And I think that it raises the question, well, who is developing these algorithms in the first place? So the people that are in tech, the people that are, you know, historically becoming cute computer programmers, becoming people who would potentially make these algorithms, do we have data on, like, who the demographics of these people typically are?
MEREDITH BROUSSARD [00:18:05] We do have demographics on this. And tech is not a diverse place. It is majority male. It is majority white. I mean, not only do we need more diversity at the upper echelons of tech, you can also look at who the really famous tech gurus are. Right. You’ve got, what, Bill Gates, you’ve got Mark Zuckerberg.
JVN [00:18:43] Elon Musk. Does he count?
MEREDITH BROUSSARD [00:18:46] You know, he’s going to get a thumbs down from me, but I think he does count.
JVN [00:18:51] Well, I think they all get thumbs down from me. There does seem to be a trend. Like, all white straight guys, it seems like, unless they’re, like, low-key poly or bi.
MEREDITH BROUSSARD [00:18:59] They’re all educated at the same kinds of schools in, you know, in math slash computer science, because computer science, of course, is a descendant of mathematics. And so you have a very homogeneous mindset. And what happens then is that people who are very much the same have collective blind spots. So, for example, when you are entering your data into a database and you only have ‘male’ and ‘female’ as options for gender, you know, you don’t have ‘non-binary,’ you don’t have ‘gender nonconforming,’ that’s because of the unconscious bias and the habits of the people who are making the database. It’s a problem!
JVN [00:19:58] Woah! It is. So since 2019, since the last time we talked, so many things have changed. Our life has largely moved onto Zoom, indoors. We’re wearing masks everywhere. How has that part of things changed this whole technochauvinism, like, world that we live in? Or has it? Has it made it worse? It’s made it worse, I bet, huh?
MEREDITH BROUSSARD [00:20:33] Well, I’m really interested in the way that since about 2018, 2019, people have awakened to the problem of digital surveillance. So it’s not that there’s more surveillance happening now than there was before. There’s actually pretty much the same level of invasive surveillance happening. It’s just that people are starting to object to it more. And I think this is great. I think we need more objecting to it and we need more questioning of using facial recognition technology in policing. We need to question whether people should be building models to scan social media to predict who’s going to be the next school shooter, because, no, we shouldn’t be doing these things.
JVN [00:21:26] So. OK, so obviously I agree with this, but then I just think about ‘Fox-itis.’ I think about, like, you know, someone watching Fox News all the time and being like, “Well, why wouldn’t we want to predict our next mass shooter? Like, that seems like a good idea to me. You know, computers are so smart.” And I’m obviously playing a character here. So don’t think I’m actually this much of a nightmare. “But computers are smart and they are less biased because it’s math. So why wouldn’t we want that? I mean, what’s going to go wrong if we look for the next mass shooter on social media?”
MEREDITH BROUSSARD [00:22:06] I mean, I hear you playing devil’s advocate, but it’s just wrong.
JVN [00:22:23] Well, because wouldn’t it, like, aren’t you going to be, like, most likely getting people involved who weren’t and aren’t? And it would, I’m sure, be racially targeting people. It just wouldn’t work, is the point.
MEREDITH BROUSSARD [00:22:29] The point. So there’s this amazing book by Ruha Benjamin called Race After Technology, and she has this framing that one of the ways that we should start thinking about technical systems is that automated decision-making systems discriminate by default. So the old way of thinking was, “Oh, tech is so great, we should be using computers for everything.” But instead, if you change your frame of reference to say, “Alright, when we use automated decision-making systems, they are going to discriminate by default,” that allows you to start seeing what the problems are, right, and what the potential problems are, better.
So, for example, let’s take the school shooter thing, you know, scanning social media for school shooters, which, by the way, is a bad idea. So who are the kids who get targeted by school police? Well, those are the students of color, because who gets targeted by policing in general? Communities of color. That’s not because there’s more crime happening. It’s because of overpolicing, and decades of overpolicing. So an abolitionist stance does not say, “Let’s use social media data to predict more crime among communities of color”; an abolitionist stance says, “Let’s not do this at all.” Do you remember Jeff Goldblum in Jurassic Park where he’s talking about making the dinosaur technology?
JVN [00:24:18] Is that the water, that sexy water scene? Where, like, the water’s thumping? Oh, yeah.
MEREDITH BROUSSARD [00:24:22] I think it’s before that, where he’s like, you know, “We were so excited about saying, ‘Can we do this? Can we rebuild dinosaurs?’ We didn’t stop to think, ‘Should we rebuild dinosaurs?’” It’s just such a good idea to keep in mind when you’re thinking about building technology to replace X social thing.
JVN [00:24:49] And so now I want to, I mean, and I obviously, like, on my Instagram bio, I have: “Defund the police. Fund community.” I’m very much with you. I am very much, like, we have to get-, I mean, for me, I always find myself with, like, nightmare white people in my DMs who are, like, “Well, I hope you never need to call the police.” And I’m like, “Actually, I hope you never look up, like, how slow they are to respond and, like, how much they actually don’t really need to come protect you.” We touched on it earlier, but it’s, like, there are ways that law enforcement already is, like, really in bed with tech and with using tech in their day-to-day job. So can you remind us of some of the ways that police and prisons use artificial intelligence outside of that, like, horrific recidivism quiz? Because I know there’s more. Yes.
MEREDITH BROUSSARD [00:25:46] In ‘Coded Bias,’ one of the things we see is how facial recognition algorithms work and how they are being used in policing. So there’s a scene where Big Brother Watch in the UK is kind of going around behind some police who are doing stops based on facial recognition. And Big Brother Watch is telling people what the police are doing and saying, “Listen, you have rights.” Because it’s a nightmare scenario, right? There’s this fantasy that there would be cameras everywhere, and the cameras would be running facial recognition technology, and that would be comparing the facial scans against, you know, say, surveillance footage of crimes being committed, and people would be matched up, and then the police would be able to just stop people and find all the criminals and put them away.
I mean, it’s a dystopian fantasy. It’s straight out of Hollywood. And when we look at how these things are actually implemented, we see that there are so, so many problems and there’s so much wasted money. So Joy Buolamwini’s work in ‘Coded Bias’ shows us that facial recognition technology is better at recognizing men than women. And it totally excludes trans and non-binary folks. It’s better at recognizing light skin than dark skin. And so her intersectional analysis shows us that these technologies are actually worst at recognizing darker-skinned women. And what happens as a result is, when these technologies are used in policing, people with darker skin get mismatched more often.
There’s a very famous case in Michigan that was in The New York Times, where Robert Williams was arrested unfairly because a facial recognition program suggested that his driver’s license photo matched the surveillance footage of a shoplifter at a watch store in Detroit. He doesn’t even live in Detroit. He wasn’t anywhere near Detroit when the crime happened. But this bad, you know, sort of facial recognition technology said, “Oh, this is a match.” And the police went over and arrested him. Again, wrongfully. And this is not the only time this has happened. There are other cases going through the courts right now where people have been wrongfully arrested, wrongfully accused because of faulty facial recognition technology.
And a lot of people would say, “Alright, well, the way to fix this is to get more pictures of people of color into the training data that’s used to feed the facial recognition systems. Because, oh, well, if the facial recognition is bad at recognizing people with dark skin, let’s just put in more pictures of people with dark skin and train the algorithms better.” This is not the answer. And this is one of the things that I hope people take away from ‘Coded Bias.’ The answer is not to do that, because facial recognition technology is disproportionately weaponized against communities of color, against poor communities. And the way to fix this, the way to make it more just, is to not use these technologies in policing at all.
But one of the amazing things that’s come out of the movie and out of Joy’s work is that the big tech companies have called a halt to developing facial recognition for policing. Some of the big tech companies just put a total halt on it; Amazon said they were going to pause for a year. I think that’s up next month. I think they should put a permanent ban on it. And the other amazing thing that has happened recently is that in the EU there is new AI legislation proposed that would regulate these technologies and would say, “You cannot use these technologies unless they are under strict supervision.” So they’re categorized as high risk.
JVN [00:30:17] Well, I think about what’s going on in France right now and, like, the insanely Islamophobic laws that have been passed against Muslim women. And, you know, a lot of times, as we’ve seen with this law enforcement conversation, it’s, like, reform versus abolition. And I think that’s because you cannot reform something that was inextricably linked to white supremacy and, like, the killing and imprisonment of Black people. And literally that is what law enforcement was based off of in the United States.
And so you said that Amazon’s moratorium was up next month, which kind of brings me into this: Timnit Gebru. She was one of the people formerly on Google’s Ethical Artificial Intelligence team. I remember reading articles about, like, this all going on last year. And then Timnit being, like, “I’m out of here!” And it was, like, a whole thing. But, like, major, we’re obsessed.
So what have been some of the most effective ways in this last year that you were, like, “These heauxs are getting smart, OK”? Like, putting this pressure on Amazon, and, like, you saying that you really want people to, like, come away from this feeling empowered and knowing that you do have a way to challenge this, you know, this technochauvinism.
MEREDITH BROUSSARD [00:31:30] OK. So this is the question about, like, what gives me hope, right? OK. I think the thing that gives me hope is looking at individual people who are really making a difference on behalf of the collective good. Because for so long, the narrative about technology was, “Oh, we all have to be individual. We all have to be in charge of our individual technology profile. And it’s all on us as individuals.” And it’s just so much of a burden. I mean, for example, like, I do all the tech support for everybody in my house, and, like, it is a huge amount of additional work. And I think about this when I think about the idea that somehow individuals would be responsible for curating their own feeds and managing the ad algorithms that are kind of working to show what’s in our feed and what kind of advertising runs next to it. It’s just too big. Like, no one person can handle all of it all the time.
So we need collective action. We need regulation of tech companies. We need regulation of AI technology, so that it’s done for all of us collectively instead of all of us being individually responsible. Because the individual responsibility, the tech companies self-regulating, it has not worked. It has brought us to the disaster of the current moment. So I’m really inspired by people who are working in collectives to solve the problem on a bigger scale. A few people that I will call out: obviously Joy Buolamwini, just an inspiring, inspiring person. All of the other people featured in ‘Coded Bias’ are, you know, many of them are friends and they are all inspirations. Safiya Noble’s work, Cathy O’Neil’s work, Zeynep Tufekci, Amy Webb. Yeshimabeit Milner is another activist who is doing amazing work. She runs an organization called Data for Black Lives. And then there’s also a movement called Public Interest Technology that I’ve gotten involved in in the past couple of years. And this is exactly what it sounds like. It’s technology in the public interest. It’s making better government tech. It’s questioning algorithms when they’re used to make unjust decisions. And public interest technology also has a special focus on those whose voices have not been heard traditionally.
JVN [00:34:40] In your opinion, do you think that having more diverse computer scientists and more people working their way into these upper echelons could help to dismantle it from the inside as well? Or do you think that the machine is so powerful and so nasty that it would, like, convert people trying to do good from within it?
MEREDITH BROUSSARD [00:34:58] Well, I think we need more Black computer scientists. We need more POC computer scientists overall, but we also need the institutions to be less racist. So, for example, Timnit Gebru, Black computer scientist, one of the most talented AI ethics researchers in the world, was working at Google, you know, great hire by Google. Her colleague Margaret Mitchell, also wildly talented. And then Google, like, made it a bad working environment for them and fired them both. So, you know, it doesn’t really help to get people in the door if you’re then going to make them miserable.
JVN [00:35:40] They actually, the way that I read about it, it’s, like, Timnit found all this, like, fucked up, racist shit in their algorithm. And then when she raised it up the ladder, she was like, “Hey, we need to look at this stuff.” They were like, “Don’t rock the boat like that.” And then she was, like, “Bye.” And so, but yes, I mean, when you say making an uncomfortable working situation, they were actually, like, perpetuating, like, tech white supremacy and, like, tech chauvinism when one of the most talented people in the world is saying, “No, this isn’t really how it should go.”
MEREDITH BROUSSARD [00:36:09] Exactly, exactly. So the tech companies need to face up to the problems inside their systems. The situation going on when Timnit was fired was: she was publishing a paper with a bunch of co-authors that had an unbelievable number of references. And it was about how the language models that are used to figure out what kinds of Google search results you get or, you know, how Google translates things, how these models have bias. Now, this is something that everybody knew, like, it was based on, you know, decades of scholarship. It was not a-, it was not unknown, it was just that the company didn’t want it published.
Another thing that Timnit was working on is something that I’ve been, I’ve been talking about for years, which is the problem with self-driving cars. Algorithms, the image recognition and vision algorithms that are used by self-driving cars are also racist. So in the same way that facial recognition technology is better at recognizing light skin than darker skin, these are the same kinds of algorithms that are embedded in self-driving car technology. And so if self-driving cars are allowed to, you know, roam free, which they’re not because they don’t work, but they’re going to hit people with darker skin more often than people with lighter skin. Why would we do that? Like, why do we need that technology?
JVN [00:38:57] There’s two things that really stick out for me here. A lot of our tech goes back that far, into the 60s, when people were starting to understand how a lot of these things worked, like, there wasn’t diversity in the upper echelons because segregation was federally mandated. Like, there wasn’t an ability for Black women to be a part of, to be a part of these schools. There wasn’t an ability for people of color to be a part of these schools because of the state-sanctioned segregation. So it’s like the voices that would have needed to be involved, like, have been missing for decades and decades and decades, and one year of people, like, waking up in the United States isn’t going to be enough to fix that. So I just think about when people, if, if you know, if you’re listening to this and you think, like, “Oh, I mean, that sounds a little bit, you know…” No! Because it’s really not that hard to see when you think about how enmeshed all of these systems have been on top of each other for so long. It’s actually really very straightforward.
The other thing that you said that I think is really fascinating, and I’m hearing about this in more and more industries, is this idea of personal accountability not being enough. Look, can I recycle, can I try to, like, go for aluminum? Can I try to make better decisions when it comes to being green? Yes. But as long as the United States continues to, like, you know, ship all of its plastic away and, like, not fix its recycling systems, like, there are, like, I think, five polluting corporations that account for 90 percent of all the polluting that we do. So as long as there are certain systems that are in place, and until those systems are dealt with, all the personal accountability in the world isn’t going to protect you or fix something.
And so one thing that I heard you saying about where we are right now with artificial intelligence and these algorithms is that, like, we need systemic change. And how do we get there? It’s enough individuals coming together and creating a collective to really put our power together to get these things to change, because big tech really is so unregulated and has so much money that it’s really difficult. I think it’s just, it’s a minute before we all, well, hopefully it won’t be a minute, I think it is happening. But is that what I hear you saying, that we need more systemic change, and in order for that to happen, we need-, all of us need to wake up to how much we’re really being brutalized by the system?
MEREDITH BROUSSARD [00:40:06] Hundred percent, yes. OK, let me, I have two things that I want to, I want to share with you that I’m so excited to share with you. To what you just said, first, there’s this new book called ‘Atlas of AI’ by Kate Crawford that looks at the environmental cost of creating artificial intelligence. So you tend to think about AI as something that is going to replace, say, paper-based processes. But actually it uses a shocking amount of energy and electricity and natural resources in order to train an AI model. So using AI is not at all environmentally friendly, which is really counterintuitive. And so I’m really excited about people starting to talk more about the environmental costs of using artificial intelligence, the environmental costs of using computing for everything.
And then the other thing I want to show you is this amazing paper by Mar Hicks, it’s called ‘Hacking the Cis-tem: Transgender Citizens in the Early Digital State.’ So this is a paper about transgender Britons who tried to correct the gender listed on their government-issued I.D. cards, but ran up against the British government’s increasingly computerized methods for tracking, identifying and defining citizens. So as far back as 1958, there were trans folks who were changing their gender on their official I.D. card and, and running into problems related to computerization.
And so I wanted to tell you this because after our last conversation, I had this just, this little light bulb went off in my head. Because we were talking about, you know, what kinds of gender are represented in databases. And I had this little light bulb moment where I realized this is somewhere where I can be a better ally, because I’m, I’m multiracial. And I remember when I was a kid, when there weren’t multiracial options on forms, I would have this, this kind of searing pain whenever I had to fill out a form. Whenever there were little boxes and I didn’t see my racial identity represented.
I realized, “Oh, wait, this is the experience that people are having when they’re not seeing their gender identity represented. And this is a problem and this can change.” So I’ve been writing about this ever since. And in fact, I have a new book coming out. It’s called ‘More Than a Glitch: What Everybody Needs to Know About Making Technology that’s Anti-Racist, Accessible, and Otherwise Useful To All.’ And there’s a whole chapter in the book about how the new frontier for gender rights is inside databases. And it’s all based on our conversation.
JVN [00:43:25] Meredith! I have chills everywhere! That’s amazing. I can’t believe that. Meredith, thank you so much.
MEREDITH BROUSSARD [00:43:35] Well, thank you. I mean, it really just, like, put it, connected things in my mind in a way that I hadn’t before. And I was like, “This is, this is important. And this is a way that I can be an ally.”
JVN [00:43:46] I love that. I also think, like, one thing that, I mean, this is, like, kind of there, but just this idea that, like, gender would be something that-, it’s fluid. Like, your gender is fluid, like, some people feel aligned with their gender for their whole life. For other people it, like, goes back and forth sometimes, and people just do all sorts of different things. So I think even being able to, like, get that into a coded way, or an algorithmic way, of, like, being able to go back and change it, being able to have it be a thing that, like, isn’t necessarily, like, always in a fixed position for the rest of your life, because, I mean, you know, your gender identity is not a fixed situation for everyone. That is incredible. So, as we start to get to the end of our interview: where do we go from here? Are there ways that we can, like, protect ourselves from algorithms moving forward? Is that about limiting data permissions? Is it about, like, literally reading all those terms of service? Like, what do we do from here?
MEREDITH BROUSSARD [00:44:53] Oh, man, if you had to read all of those terms of service, it would take years! I think that the best place to start is anywhere, anywhere you feel moved to start protesting. That is the right place to start. So if you are in, if you’re a student in school, you can ask, “OK, how are algorithms being used in, in deciding grades? How are they being used to detect plagiarism?” I know, right? Or, “How are algorithms being used to proctor exams, and if so, what are the points of failure? How could this algorithm fail, and for whom?”
That’s a really good starting place. So, for example, test proctoring algorithms are a big scandal happening right now at Dartmouth Medical School, because they were using something called ExamSoft to proctor exams. This is a technology that is known to have huge problems. And the technology flagged a lot of students for cheating. And yeah, some students are cheating, like, there is cheating on tests. Yes. But the amount of cheating that the software detected was far more than the actual amount of cheating that was happening. And some of it is because of ways that ExamSoft is interacting with Canvas, which is the learning management system. And so people are being unfairly accused of cheating.
So if you’re a student and there are algorithms being used at your school, which, by the way, there definitely are, you can challenge them. If you are involved in abolitionist causes, you can, you can challenge the use of facial recognition in policing. And the ‘Coded Bias’ site has a lot of really great ways to get involved. So you can get involved with the Algorithmic Justice League, which is Joy Buolamwini’s organization. You can read more. You know, you can read ‘Artificial Unintelligence.’ Obviously, you can read ‘Weapons of Math Destruction.’ You can read ‘Algorithms of Oppression.’ You can read ‘Race After Technology.’ There are, yeah, I would direct people to the ‘Coded Bias’ site because there are so many great resources there for getting started.
JVN [00:47:33] And I’m going to share with, with our listeners now what really speaks to me and what makes me want to get involved most immediately is, after, you know, I have moved to Texas, I live here now. There’s a lot of really problematic things that are going on here with voting, with registration, with law enforcement. So one of the first things that I want to do after this is really learn about the ways in which my local municipality interacts with technology to arrest folks. And I think that’s really, really interesting. I want to learn more about that. Meredith, I feel so honored that you took your time to talk to us. I also feel, I don’t even know what a strong enough word is, like, I just, I can’t believe a conversation that we had turned into your literal, like, scholarship, like, you’re so smart, like, just literally, like, so genius. I feel so honored! Slash, like, we really need it for non-binary and gender nonconforming people, which is amazing. So that is just, ah!
MEREDITH BROUSSARD [00:48:37] Our conversation really, like, it, it sparked some things for me and, and you are just such a terrific interviewer that it made me start thinking about things differently.
JVN [00:48:50] Well, I think I can count on, like, one hand how many people we’ve ever had come back to the show, maybe two. Like, out of, like, two hundred episodes. And I think you are just one of my most favorite, fascinating people to talk to. And I can’t wait to see what else you do. And I can’t wait to read your new book. And I know that we’ll have you back lots of times. So you’re, like, our resident technology expert. I love it. Yeah, you really are. I love you so much. Congratulations on your new book. And thank you. Thank you. Thank you so much for coming and giving your time to us today.
MEREDITH BROUSSARD [00:49:20] It was so great being here. Thank you so much for this great conversation.
JVN [00:49:27] You’ve been listening to Getting Curious with me, Jonathan Van Ness. My guest this week was Professor Meredith Broussard.
You’ll find links to her work in the episode description of whatever you’re listening to the show on.
Our theme music is “Freak” by Quiñ – thank you so much to her for letting us use it. If you enjoyed our show, honey, please tell a friend, even tell a stranger if you feel safe to do so, and show them how to subscribe.
You can follow us on Instagram & Twitter @CuriousWithJVN. Our socials are run and curated by Emily Bossak.
Our editor is Andrew Carson and our transcriptionist is Alida Wuenscher.
Getting Curious is produced by me, Erica Getto, and Emily Bossak.