Silicon Valley Struggled Over What to Do About ISIS Fanboys | Joe Rogan and Renée DiResta

Renée DiResta

Renée DiResta is the Director of Research at New Knowledge and a Mozilla Fellow in Media, Misinformation, and Trust.

Transcript

Don't feel bad. I'll get you one. Hold on a second. Who's seen all this stuff? What do you mean? Obviously Facebook has checked this out. I'm sure Twitter's aware. What has the reaction been? And is there any sort of a concerted effort to mitigate some of the impact that these sites have?

Yeah, lots of it, actually. I think 2017 was when we, we being independent researchers, people on the outside of the companies, academics, began to find the content. Investigative journalists would identify the name of a page, and then me and people like me would go scour the internet looking for evidence of what was on that page. I found a bunch of the stuff on Pinterest, for example, and wrote about it. A guy by the name of Jonathan Albright found a CrowdTangle data cache, and with that we got the names of a bunch more pages, a bunch more posts, and we had some really interesting stuff to work with.

Originally the platforms were very resistant to the idea that this had happened. The night Trump gets elected in 2016, Twitter goes crazy with people who work at Twitter saying, oh my God, were we responsible for this, which is a very Silicon Valley thing to say. But what I think they meant by that was that their platform had been implicated as hosting Russian bots and fake news and harassment mobs and a number of other things, and there had always been the sense that it didn't have an impact, that it didn't matter. So this was the first time that they started to ask the question: did it matter? And then Zuck made that statement, that fake news is a very small percentage of the information on Facebook and the idea that it could have swung an election was ludicrous. So you have the leaders of the platforms digging in and saying it's inconceivable that this could have happened.

As the research and the discovery begins to take place over the next nine months or so, you get to when the tech hearings happen. I worked with a guy by the name of Tristan Harris; he's the one who introduced me to Sam. He and I started going to D.C. with a third fellow, Roger McNamee, and saying, hey, there's this body of evidence coming out here, and we need to have a hearing. We need to have Congress ask the tech companies to account for what happened, to tell the American people what happened. Because what we're seeing as outside researchers, what investigative journalists are writing, the things that we're finding, just don't line up with the statements that nothing happened and this was all no big deal. So we start asking for these hearings. And actually, myself and a couple of others then begin asking, in the course of these hearings, can you get them to give you the data? Because the platforms hadn't given the data. So it was that lobbying by concerned citizens and journalists and researchers saying, we have to have some accountability here. We have to have the platforms account for what happened. They have to tell people, because this had become such a politically divisive issue: did it even happen? And we felt like having them actually sit there in front of Congress and account for it would be the first step toward moving forward, but also toward changing the minds of the public and making them realize that what happened on social platforms matters.
And it was really interesting to be part of that as it played out. Because one of the things that Senator Blumenthal, one of the senators, did was actually say that Facebook and Twitter have to notify people who engaged with this content. There was this idea that if you are engaging with propaganda content, you should have the right to know. And so they started to push messages. Twitter sent out these emails to all these people saying, you engaged with this Russian troll. And Facebook created a little field, a little page, that told people if they had liked or followed a troll page. So it was really trying to get at making the platforms accountable.

But they did it outside the platform, through email, huh? Which is interesting, because I would never read an email that Twitter sends me. You're like, this has just got to be nonsense. I didn't get one, so I guess I just got lucky. I might have had a multiple-day back and forth with some Russian troll.

But that was, I think, one of the first steps toward saying, how do we make the platforms accountable? Because the idea that platforms should be accountable was not a thing that everybody agreed on in 2015, when we were having this conversation about ISIS. And that's where there's a through line here, and it does connect to some of the speech issues too, which is: what kind of monitoring and moderation do you want the platforms to do? When we were having this conversation about ISIS, there was a not insignificant collection of voices that were really concerned that if we moderated ISIS trolls on Twitter, not the beheading videos, there was sort of universal agreement that the beheading videos should come down, but if we took out what were called the ISIS fanboys, which were like 30,000 to 40,000 accounts at their peak... There's a document called "The ISIS Twitter Census," for anyone who wants to actually see the research done on understanding that Twitter network in 2015. There was a sense that one man's terrorist is another man's freedom fighter, and if we took down ISIS fanboys, were we stifling their freedom of speech, their freedom of expression, and, goodness, what would come next? And when you look at the fundamental swing that has happened now, in 2018, 2019, there's that same narrative, because originally no moderation was taking place and now there's a feeling that it's swung too far in the other direction. But the original conversations were really, how do we make Twitter take responsibility for this?

And legally, they aren't responsible for it, right? They are legally indemnified; they're not responsible for any of the content on their platforms. None of the platforms are. There's a law called the Communications Decency Act, Section 230, and it says that they're not responsible. They have the right to moderate, but not the obligation to moderate, because they are indemnified from responsibility. So the question becomes: now that we know that these platforms are used for these kinds of harms and this kind of interference, where is that balance? What do we want them responsible for monitoring and moderating? And how do we recognize that that is occasionally going to lead to incorrect attributions, people losing accounts, and things like that?

Yeah, they're in a weird conundrum right now, where they're trying to keep everything safe and they want to encourage people to communicate on the platform.
So they want to keep people from harassing folks. But because of that, they've got these algorithms, and they tend to miss very often, like this whole learn to code fiasco, where people are getting banned for life for saying learn to code, which is about as preposterous as it gets. I think the learn to code fiasco is going to be the tipping point, the thing a lot of people in the future will look back on as the moment the heavy-handedness became overreach. Because, I mean, Jesus Christ, if you can't say learn to code... I look at my mentions on any given day, especially like yesterday, when I had a vaccine proponent on.

Yeah, I watched it. Peter Hotez.

Yeah, Peter's great. And what was really disturbing to me was that the vast majority of the comments were about vaccines and so few were about these unchecked diseases that are running rampant in poor communities, which was the most disturbing aspect of the conversation to me. That there are diseases that rob you of your intellectual capacity, that are extremely common, that as many as 10% of people in these poor neighborhoods have, and almost no discussion. It was all just insults and, you know, you fucking shill and this and that.

I know, my mentions are going to be interesting, but...

Oh, they're going to be a disaster today. I know, I know.

Well, let me... I think that one of the challenges for the platforms is that a lot of things start out like learn to code. I remember, I watched that play out. Covington Catholic was another thing. With learn to code, there were some of the people who were trolling and just saying learn to code and, you know, whatever, you don't have a right to not be offended. But then there were the other accounts that took it that step further and began to throw in the ovens and the other stuff with learn to code, right?

Yes.

And that's one of the challenges with the platform, which is, if you're trying to assess just the content itself, if you start doing keyword bans, you're going to catch a lot of shit that you don't want to catch.

Right.

But the flip side is, and this is the challenge of moderating at scale, what side do you come down on? Do you come down on saying, 75% of people with hashtag learn to code are not doing anything incredibly offensive, but the 25% who are really change the tone of the overall campaign and the hashtag for the entire community? And that's where you see Twitter, I think, come in with the more heavy-handed, just shut it down kind of thing. I don't know that there's an easy answer. Even today, what was the latest kerfuffle? Elizabeth Warren got an ad taken down on Facebook, and then there was a whole conversation about whether Facebook was censoring Elizabeth Warren. I personally didn't think that it read like censorship.

What was the ad about?

It was an ad about, funny enough, her platform to break up Facebook.

Whoa. So Facebook took that down? Like, yeah, listen, hooker.

It was kind of... it seemed like... it sort of read more like she had a picture of Facebook's logo in the image, and that violates the ad terms of service. And the reason behind that is actually because Facebook doesn't want people putting up ads that have the Facebook logo in them, because that's how you scam people.

Sure. It's a great way to rip people off.
And so it's probably just an automated takedown: it halts the ad, you have to go and make some changes, and then you can push the ad back out again. But it just happens at a time when there's so little assumption of good faith, so much extreme anger and polarization, and an assumption that the platforms are censoring with every little moderation snafu. I don't know how we have the conversation in a way that's healthy and looks toward solutions, as opposed to the left screaming that it's censored, the right screaming that it's censored, and the platforms trying to figure out how to both moderate and not moderate, which is a tough position to be in. I don't have any good answers.

No one does. No good answers. I think that's a good issue, and Vijaya discussed that pretty much in depth. She was saying this is about moderating at scale, when you're talking about millions and millions and millions of posts, a couple thousand people working for the organization, and then algorithms and machine learning trying to keep up. And that's where things like learn to code come in, and people are so outraged and pissed off, because when they do get banned, they feel like they've been targeted. They really just ran into some code, and then it's really hard to get someone to pay attention to your appeal, because there aren't enough people looking at these appeals, and there are probably millions of appeals every day. It's almost impossible.

Yeah. And depending on which side you're on, you also hear, this person is harassing me and I'm demanding moderation and nobody's doing anything about it.

Right. Yes.

And it's definitely, I think, gotten worse. It's interesting to look back at 2016 and wonder how much of where we are now is in part because not a whole lot happened in 2016. In 2016, and 2015 in particular, there was very light, almost no moderation, just kind of let it all hang out there. And I look at it now, particularly as it evolves into this conversation about free speech, public squares, and what rights we should expect on the new infrastructure for speech. It's really tough. I think some of it is that people who hear the words free speech just assume it's people asking for a carte blanche right to harass. So how do we balance that? I think Jack and Vijaya were saying this on your show: how do we maximize the number of people who are involved and make sure that all voices do get heard, without being unnecessarily heavy-handed in moderating thought or content, and instead moderate behavior, particular types of signatures of things that are inauthentic or coordinated? And this, again, gets to disinformation too: rather than trying to police disinformation by looking at content, really looking instead at actions and behavior and account authenticity and dissemination patterns, because a lot of the worst trolls.
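To make the distinction discussed above concrete, here is a minimal, hypothetical sketch in Python contrasting a content-only keyword filter with a check on behavioral signals (account age, posting rate, mass-duplicated text). The Post fields, thresholds, and function names are illustrative assumptions for this example only, not any platform's actual rules or code.

```python
# Hypothetical illustration of content-based vs. behavior-based moderation.
# All fields and thresholds are invented for the example; real systems use
# many more signals and far more nuance.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str
    author_age_days: int   # how old the posting account is
    posts_last_hour: int   # how many posts the account sent in the past hour


def keyword_flag(post, banned_phrases):
    """Content-only rule: flag any post containing a banned phrase."""
    text = post.text.lower()
    return any(phrase in text for phrase in banned_phrases)


def behavior_flag(post, recent_text_counts):
    """Behavior-oriented rule: flag new, high-volume accounts or mass-duplicated text."""
    is_new_account = post.author_age_days < 7
    is_high_volume = post.posts_last_hour > 20
    is_mass_duplicated = recent_text_counts[post.text.lower()] > 50
    return (is_new_account and is_high_volume) or is_mass_duplicated


posts = [
    Post("longtime_user", "Honestly, learn to code. It changed my career.", 2000, 1),
    Post("fresh_account_42", "learn to code learn to code learn to code", 2, 45),
]
recent_text_counts = Counter(p.text.lower() for p in posts)

for p in posts:
    print(p.author,
          "keyword_flag:", keyword_flag(p, {"learn to code"}),
          "behavior_flag:", behavior_flag(p, recent_text_counts))
```

In this toy example both posts trip the keyword filter, but only the brand-new, high-volume account trips the behavioral check, which is roughly the trade-off described in the conversation: policing content alone sweeps up ordinary users, while behavioral and dissemination signals are aimed at coordinated or inauthentic activity.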