Renée DiResta at ComNetworkV

ComNetV Keynote

Renée investigates the spread of malign narratives across social networks and assists policymakers in devising responses to the problem. She has studied influence operations and computational propaganda in the context of pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare, and has advised Congress, the State Department, and other academic, civil society, and business organizations on the topic. At the behest of the Senate Select Committee on Intelligence (SSCI), she led one of the two research teams that produced comprehensive assessments of the Internet Research Agency’s and GRU’s influence operations targeting the U.S. from 2014 to 2018. Renée is a co-author of The Hardware Startup: Building Your Product, Business, and Brand, published by O’Reilly Media.

Below is a transcript of the conversation.

Transcript

Vidya:

Good morning, Communications Network, and welcome to a really terrific session. I am the director of communications with a foundation in Northern California. Wherever you are joining us from, I hope you are well. We will be taking your questions for Renée DiResta, one of the premier researchers studying the spread of false and malign narratives. Thank you so much for being with us, Renée. You have shared your research with the Senate, Congress, and policymakers, and you have been helping us understand what is happening with false narratives and how they are affecting the issues we care about. So we are happy to have you.

Renée: It is great to be here.

Vidya: So, I just want to say, I have been reading your work in The Atlantic, and I know from talking to you that you came to this research as a mother who discovered this world firsthand. Can you tell us more about that story and how you started this journey down the rabbit hole of disinformation?

Renée: Yes, sure. I now have three kids, the third as of six weeks ago, but my first was born in December 2013. I live in San Francisco, where you have to put your kids on school waiting lists early. Anyway, I knew that San Francisco had an anti-vaccine problem, so I looked at the vaccination rates for the public schools, and I was surprised and horrified. I wrote a post, made some visualizations, and put it out into the world.

And I called my congressman, which I had never done before. I asked, “Can we do something about this?” And he said no. Then the measles outbreak started, and I called back again, and this time he said, “We will do something. And if you would like to be involved as a parent voice, we would love to have you.”

So I wound up getting together with other women, and we made a Facebook page, because this is what we thought you do when you become an activist. We made a Facebook page called Vaccinate California. Then we had to get an audience, so we started running ads. And we made a Twitter account. We knew the opposition was far better organized; they had been doing this for quite some time. … We did a lot of quantitative analysis on how that movement had grown. And we wound up getting the law passed, the bill we were working on to raise vaccination rates, and that happened. We felt that we had done a public service there.

What happened next surprised me. I got a call from someone who worked for the Obama administration, saying, “The analysis you are doing is like what we are doing on ISIS, because the same tactics you are describing, and the way the engines are promoting the content, are what we are seeing there. Would you mind coming down to talk about it?” And I thought, I know I am not an expert, but this was indicative of a feeling I had about the future of campaigning: that everything would come down to who had the largest number of fake accounts and whose content spread furthest. There was an element of having to find these accounts with different research tools. There is something about the message, or the way the message is spread, that is not authentic. I include propaganda here because this is a type of propaganda: information with an agenda. That has always been, more or less, the definition, but it is something to understand now that we are on the internet and people are competing for attention. There is a barrage of information out there all of the time, and understanding what modern-day propaganda looks like is an important research angle. Alas, it is a term we have actually tried not to use, because it has become politicized; it is the thing that many people heard first.

And when the term came about, “fake news” referred to things that were demonstrably false: stories like the Pope endorsing Donald Trump, news stories that were going viral and trending on Facebook and Twitter and other places and that were untrue. Because they were so sensational, so outrageous, people would click on them and share them along. The internet is designed to work like this. One of its great promises was that anyone could say what they wanted; you could start a blog and put your message out to the world. This was the thing that would expand the discourse and lift up previously underrepresented voices, and that would be a wonderful thing.

The problem is, as people began to realize this power existed, manipulative, bad actors came in to use it too. That is the dual nature: a tool in one person’s hand can be a weapon in another person’s hand. When we look at who is running these campaigns, they are using the same infrastructure: Facebook and Twitter and the entire plethora of social networks. We see state actors. We see domestic partisans; that is particularly relevant in the 2020 election. We see extremists, as I mentioned with ISIS a little earlier. And we see scammers, who are remarkably innovative because they have a financial motivation. The newest tactics we see are not from state actors; they are from scammers, and we can talk about that if you are interested. There was some remarkable stuff done during the George Floyd protests by people who wanted to earn ad revenue. Then think of conspiracy theorists. They have a certain dynamic that sets them apart from other groups, and that is passion: a passion where they are always determined to communicate about the thing they believe. There is no counter-content to bump up against that, no counter-speech, because in large part no one gets on Twitter to talk about how the earth is round. If you search for the shape of the earth, you will get a box to let you know the answer, but depending on the platform and how well the search functions, you will in fact get a lot of flat-earth answers to the question. That is the dynamic with these new entities that can all use the same infrastructure.

Vidya:

What is the purpose here? You described some people [who] were motivated by money. They were trying to drive up ad revenue. I saw that in 2016 as well. But others are trying to push an agenda. What are they trying to do with these different types of campaigns?

Renée: That is a great question. State actors are not always attacking other countries’ citizens. Oftentimes what we see from state actors is operations run against their own populations, particularly in authoritarian countries, or a party within a country choosing to use these platforms to maintain its hold on power. A lot of what you see from these actors is about power. If you control the channels of communication, what people see and the way they engage with information, and encourage an ecosystem where you can control what is in their feed, you are shaping and nudging them in a particular direction. By controlling that, you have in a sense controlled what your targeted communities are seeing. That is a remarkable power, particularly if you are nudging them toward an election or a course of action. Or sometimes it is not about nudging them; it is about distracting them. There is a series of goals here: to distract and to divide. I will talk about distraction, because that is something people do not have the same awareness of. Not all of this activity is designed to persuade you to change your vote or dislike someone or something. A lot of times with distraction, there is a recognition that, since we get our information from the internet, if there is an unfavorable story about you, you can flood the zone, change the topic of conversation, and distract people from that story. If you have ever gone on Twitter, you see there is a 20-second news cycle. There is a battle where you are being barraged by narratives, counternarratives, and “look over here” dynamics.

In the U.S., it is still different groups and stories and major media interacting. But oftentimes in authoritarian countries, you will see that dynamic of distraction used to maintain the regime’s relationship to its people, or to maintain its projection, its reputation, in the broader community of nations. One example: when Khashoggi was murdered, accounts linked to Saudi Arabia appeared on Twitter, flooded the timeline, and dominated hashtags with alternative explanations of what happened. In the days before there was a full public understanding of what happened to Mr. Khashoggi, there was a barrage of stories: he left the consulate, he went here, something else happened. If you flood the zone with that and make people internalize a narrative, they believe it. They come to accept [that] one particular thing happened, and when the truth comes out after the fact, the truth is competing with the pre-existing narrative. There is a sense that it is too hard to sort through all of the narratives to figure out what is happening. That is an interesting dynamic that social platforms inadvertently made possible.

Vidya:

What’s most interesting and disturbing is that the intent, and the impact, is really to degrade the whole notion of facts and truth: [casting doubt on the] answers and muddying the narratives so nothing seems legitimate. I think what you just described in the Saudi murder example is exactly that. Were you able to attribute those accounts to the government? Did you know who was behind them?

Renée: That is a great question. Attribution is hard. The way this works… I am at the Stanford Internet Observatory. We work with the platforms, as well as with civil society and government and journalists. We believe that an interdisciplinary, multi-stakeholder process is the best mechanism for understanding these operations, because different people see things at different times. Civil society groups, those who serve targeted communities, will oftentimes have a sense that something is not quite right: some content, or some fake accounts, are making their way into their communities, and that is the first red flag of what is going on.

But what winds up happening, a lot of times, is that the attribution piece requires some sort of digital forensics in order to be confident. We always attach a confidence level to what we think has happened and who we think is behind it. So, for something like Russia, outside researchers can raise a flag and say: according to what are called tactics, techniques, and procedures, according to these behaviors, this content, these sites, these narratives, we believe this is likely to be affiliated in some way with a Russian operation. That in turn goes to the platform companies, which can look into where the devices are logging in from. What connections are behind the scenes? Who is clicking on what? Who opened what on what laptop? Are they using a VPN? So we have this dynamic taking shape where the process of attribution comes from the joint assessment of a number of different actors.

Vidya:

So I am glad you brought up the platforms and the work they have been doing to identify and take down bad actors. Like a lot of people, I recently watched The Social Dilemma, the docudrama that’s getting a lot of eyeballs. It’s getting criticism from insiders who helped to build the social platforms. But it is a terrifying portrait of how spreading these narratives is as much a feature as a bug, because of the way the recommendation engines you described operate. What do you think tech companies should be doing, aside from takedowns? What could they do to corral these false narratives?

Renée: A lot of times takedowns are not the best course of action. There are different areas where this takes shape; it is not just elections or politics. COVID-19 has become a huge challenge for all of us; it has really revealed some significant lapses in how this information can go viral. For me it feels like a throwback to 2018. But I think the challenge for the platforms is the dynamic of virality: by the time a story has gone viral, it is over. And the interesting challenge for the platforms is, what do you do about managing information in an environment that was designed to facilitate freedom of expression, when we talk about removals? The deeper problem is that the curation process is run by recommendation algorithms, and it is not surfacing the things we value. It is effective at finding correlations between people who hold one belief and people who hold another and pushing those people together. Take the example of the anti-vaccine communities: not the health information angle, but the “vaccination is government overreach and a violation of your civil liberties” angle. People who were members of those communities were being referred out to militia groups. You saw this intersection of people who share one angle of their thinking, and the platform recognizes that and pushes the people together, without really understanding what it is nudging them toward. There is a statistical likelihood that community A and community B may enjoy each other’s content, so the system puts them in the same room.

So the challenge, with this huge topic, is: how do you design a correction? How do you decide which behaviors are appropriate? Take networked activism, for example, the stuff I was describing in regard to what we were doing on vaccines in California. Anyone can make a page and run ads; there were no guardrails on the system, so anyone could use the tools. The question becomes: if you don’t want the platforms making ad hoc decisions, then they have to be more adept at understanding who they are nudging together, what they are amplifying, and what the downstream harms are. For a very long time the only notion of harm was: is this an immediate incitement to violence? But that is not the only type of harm.

Pushing out content saying “Do not wear masks” or “This is just a really bad flu,” along with the narratives that say the coronavirus doesn’t exist, has really had an impact on public health and the community. So the question becomes: that is not an immediate incitement to violence, so how should that speech be treated? That is where we are today. How do we decide, in this world of very fast-paced viral commentary, how the platforms should engage? In my opinion, one of the best tools they have is to throttle content, to add friction. That would give fact-checkers time to come in and help people understand the information in context, as opposed to letting it get millions of views and then taking it down.

Vidya:

What is your advice to people who are storytellers and truth-tellers, and in most cases the people running the social media channels and pages? We are trying to privilege and promote their messages and improve the outcomes. What is your advice for how we should be engaging?

Renée: First, I think it is important that people realize, even if you are in business (I am trying to find the appropriate word), even if you are not in politics or something you would think would be heated and confrontational, these narratives find their way into various industries in various ways. Wayfair found itself getting dragged into a human trafficking conspiracy by way of a filing cabinet, and it got turned into a whole story on the internet, a story that they were somehow participating in trafficking. This is a furniture company. That meant the furniture company had to figure out what was going on. Why were people mentioning them? How would they respond, and what could they put out to defuse the narrative? They had to decide whether and how to engage.

I think the unfortunate reality is that these narratives can pop up in very unexpected ways. Corporations that have a strong social justice component, or that choose to amplify marginalized voices, also attract the online cultures that use these tactics. So I think it is really important, if you are a corporation (because oftentimes it is the communications person who has to deal with this), to understand what the narratives are that may impact your brand or your industry, and to have a prearranged game plan for what happens if this happens to me or to another company in my industry. Bumble Bee Tuna got dragged into a conversation about protesters because of something the president said about people throwing cans of tuna, and it was Bumble Bee specifically. They found themselves in a position where people were editing their Wikipedia page. It became a big internet thing.

Vidya:

For our community, most of the time, those issues, those topics, that is where we live. We live in the arena of public health. How do we lift up voices and issues that are not always considered? So it seems to me, what I’m hearing from you is that there is really important monitoring work to do, so we can spot this misinformation, because it is not just the biggest issues of the day; it is every issue. But I also wonder: you wrote earlier this year about COVID-19, about how false narratives were moving more quickly than public health experts, and you had advice on how they could do things differently.

Renée: Yeah, the challenge with COVID-19, for a lot of institutions, is that the internet moves at its own speed. It keeps churning, which means that if you are not out there with a message, people will fill the void with content about the topic you should be engaging on. The CDC and the World Health Organization are accustomed to doing a briefing or a press call, or putting out a tweet once or twice per day with a graphic in it. They wait until they are absolutely sure of something before they communicate it to the public. Previously this was a good thing; you want your scientists to be sure of what they are saying.

The problem is, the way we communicate today is such that the conversation is already going, whether you are part of it or not. The internet voices are very much out there. There were people on social media saying, “We should use masks, and here is my assessment based on the research.” They were not institutional authority voices; they were scientists of some sort who had done the research and were communicating to the public. The CDC remained largely absent from the conversation. So when it did change its guidance, it looked like it was leading from behind.

So there are opportunities to participate in the conversation by saying, “Here is what we know now. Here is the guidance we are giving. And there is a likelihood this guidance will change as more information becomes available to us.” Rather than the older model of waiting until you are sure, people are now more accustomed to hearing information come out in real time; they want a response. Otherwise there is a real problem: the platform companies have to figure out what to fill the search results with in the absence of verified, reputable information from an authority figure. Those institutional entities need to recognize that this is how communication happens now; this is how information moves. There is an opportunity for them to adapt and get their truth-tellers comfortable speaking to people on social media in the language people are used to hearing: this is what we know today; it may change, but this is what we know today. And I think we as a society have to get more comfortable with uncertainty. The right answer doesn’t always pop up in the feed right away.

Vidya:

It sounds like we have to get more comfortable with experts taking to social media and participating in conversations. I do want to open the conversation to people who have a question in the chat: tell us your name and the organization you are from, and if you have a question for Renée DiResta, we will get it answered. You mentioned political divides and societal divides as one area of attraction, let’s say, for bad actors who want to spread misinformation. I just wanted to hear more from you on how we should think about the intersection of race and misinformation in general.

Renée: Race is important. On gender, I have some interesting thoughts too, but race is better understood. Let’s use the Russia operation as an example. There are real underlying racial problems in this country, and there have been for a long time. Entities that want to divide society, that want to exacerbate American unrest, have the opportunity to do so by using real underlying issues. There’s no reason to fabricate an issue when you have a real moment happening in the country, today or four years ago.

So, what we would see four years ago: a lot of it was about the statues. That is where Russia got an early start. It focused on the … Confederate statues, and they were pushing narratives into their Southern-focused groups, pages like South United and their Texas groups, to push content to those communities talking about how their heritage was being destroyed. Meanwhile they were going to the … groups they had made pretending to be Black activists, saying, “Look, this country hates you, and here is evidence: they can’t even agree to take down the statues that are monuments to your enslavement.” That is a real national moment that is happening, real video footage of the statues coming down. So they are using that to create digital tension, to exacerbate a real underlying issue. And the challenge there is that these are real issues.

So the fix is not just to kick Russia off of social media. It requires deep work that we need to do as a nation to come together. The problem is, if you open your social media feed and all you see are posts reinforcing that turmoil, it does create a feeling that the turmoil is everywhere, all of the time, constantly. You can see this today with some of the recent protests. If you go to your Facebook Watch page (Facebook has been promoting Watch), one thing you will see is protest footage. One of the interesting dynamics happening there is that wherever there is a protest anywhere, even six people yelling on a corner, there is a rush to film it and feed it out. They take the footage and constantly push it out on the Watch tab, in a way that even American media is not doing. And if you are following these channels, what you see in your feed is push notification after push notification: protests, go and watch, creating a feeling that the world is burning. This is where, for those of us who study this, the challenge is real. There is real content. There are real atrocities that we want people to be aware of, but at the same time there are actors with an ulterior motive who have become the dominant channel for pushing this stuff out. So again, it is the line between what is really happening and the propaganda created around it. On race, it is a poignant moment for us as a country. On gender, I would say there is less sophistication. We have seen Russia run fake feminist pages. They were never very good. They never quite got it, and they never got very big. It was funny; I’m trying to think of a way to say this, but there was an inspirational slogan that they made. You know what I’m talking about.

Vidya:

Live, laugh, love.

Renée: I thought their version of it was funny. They had all of this stuff. So, I think we haven’t seen quite as much on the gender front. But anytime there is a tension to be exploited, they will see that opportunity and take advantage of that tension: insert themselves into the conversation and make it seem a lot worse than it actually is.

Vidya:

Going back to when you say they haven’t been as successful: you mean they haven’t been as successful with anything except exploiting the race divide?

Renée: They were able to do well and grow the size of the accounts, and when they lost their accounts, they would make new ones; there was a process of developing the next persona. But the ones focused on race tend to be the ones that attract the most attention, and actually get retweeted by the most prominent people. They are always trying to reach real influencers, real blue-check Americans who have validated identities and large followings. One goal was to get those people to amplify the message. So these trolls are out there looking to get retweeted by the president or a prominent political figure or a prominent celebrity, and they did make that happen on a number of occasions. I think that is important for our community: how can we be conscious when we amplify messages? Because these accounts do show up.

Vidya:

When you amplify other voices, how do you make sure you are not spreading the information or work of bad actors?

Renée: We are 47 days away from an election.

Vidya:

Facebook said they may take measures to clamp down on misinformation around election day. What do you make of this? What are they saying? What should we think about? What do you see happening? What will you be looking for on election day?

Renée: We have a partnership at Stanford called the Election Integrity Partnership. It is us; the University of Washington; Graphika, which does social media mapping; and a group called the Digital Forensic Research Lab. We are the core members, and then there is a broader ring of partners in civil society and in government, including state and local governments responsible for election integrity. We have chosen to focus very narrowly on voting-related narratives: things that would constitute voter suppression, misleading information about mail-in ballots, … and other stuff. What we are looking to do is see the narratives as they emerge; we have early detection tools that help us prioritize. And we have analysts on our team who look at those things as they come in, triage them, and decide: is this narrative growing? You don’t want to spend your time in some random group that is not an issue. People are wrong online; there’s bad information. What you want to do is look at the stuff that is actually gaining traction or has the potential to be influential and misleading, and ask: how do we think about that?

One of the things we can do is surface things and say: this appears to be hopping from Facebook to Twitter; maybe Facebook and Twitter need to take a look at what’s going on. The platforms cooperate with each other, and at this point we have relationships where they can cooperate with outside researchers. So we have this multi-stakeholder approach to triaging these narratives. … The platforms do not yet seem adept at saying, “This is going viral, let’s make a decision now,” or, “This is going viral, let’s reduce the number of people who are seeing these shares,” so that fact-checkers have an opportunity to come in and, as the video continues to be shared, the accurate facts are presented alongside it. Or, if they are going to take something down, having a clear policy that they can point to when they do the takedown, so it does not seem like ad hoc censorship, and so it can prevent something deeply harmful from spreading.

We are treating this as an all-hands-on-deck moment. Some big concerns are that the integrity of the outcome will be called into question regardless of who wins, and that the American public will spend the next 40 days fearing that the election is illegitimate because of misinformation, with the specifics varying depending on what side of the aisle you’re on. We want to make sure people have confidence in the outcome, and that is what we are trying to ensure alongside our various partners.

Vidya:

I want to get some questions from the members of the network. Are there any success stories we can point to?

Renée: I don’t have one off the top of my head. Americans are wearing masks, more than half of them. I think some of the narratives related to coronavirus, staying inside, and sheltering in place did reach the target audience; people did trust the authorities to an extent. But trust is at record lows across so many different institutions: the media, the government, health departments and others. I think the real challenge is, how do we restore that trust offline? The social media part exacerbates it, but it is a reflection of the underlying problem.

Vidya:

You mentioned conspiracy theories a little while ago. People are baffled and interested, and some are asking about QAnon. What is your take on that?

Renée: It started a few years back, and it was an outgrowth of an earlier conspiracy theory. Gradually … it obtained popularity … you see people going outside with the shirts on … and gradually it extended beyond the core. … I won’t go into the specifics, but what has happened at this point is that when people see the content, there are many people, myself included, [who] think this is a very real problem and people need to be aware of it. The problem is that the real authorities doing this work, like the National Center for Missing & Exploited Children, are … doing good work, but this conspiracy is commandeering attention from the real work. …

You do see people saying, “I want to save the children, so I will forward this along.” That is why people are sharing this content, which ties back to and is rooted in this wild, crazy narrative. They don’t realize it; they don’t understand the problem. They see a meme come across their screen and they want to share it. Then, when it goes viral, the theory among the promoters is that people will find out the “truth” by tracing it back to the source content. That is the dynamic happening today, so the numbers are growing.

When we talk about influence, it is hard for people who study social media to make concrete assessments. We can see engagement numbers. I can tell you how fast groups are growing, how content spreads, how many likes and shares there are, and what community it is in. But I can’t tell you if it changed someone’s mind, if it changed the way they feel.

That is a problem with the Russia assessment. I can tell you [there were] 127 million engagements on Russia content related to the election, but I can’t tell you how many people … changed their mind about a candidate, or took an action or did not take an action, predicated on what they saw. That is the challenge we have as researchers. Something has to happen where other types of social science researchers are actually asking people, “Did you see this? How did you feel about it?” to do a better job of understanding the influence and its impact. We are really trying to understand how online content consumption leads to certain offline behaviors.

Vidya:

That is what you need before you can take action. While we are taking the time to understand those things, these narratives are really flying, and growing in a different way. So that is a huge challenge. A couple of people are asking questions about the ethics of online engagement, because some of the tactics we talk about being used by malign actors are tactics that people in our community want to use; we want more people to engage with our content. So how do you think about the ethics of what we are doing on the web? One … person asked: is it OK to brainwash people into, say, a climate-friendly lifestyle? What are your thoughts on that?

Renée: I wish I had an answer to that. That is one of the key challenges we are struggling with. We talked about this in the very early days. Back when I was working on the ISIS issue, the prevailing belief was that freedom of expression was the highest good. If you took down ISIS accounts, what would you do when the U.S. government asked you to take down other accounts? What if China asked? A lot of that debate was happening in 2015. We have kind of moved past that, and had to re-evaluate.

I think, on the ethics, you don’t want to be in a position where we accept a manipulative tactic because we think the ends justify the means. The challenge, particularly in the 2020 election, is that the lines established around what would come down were drawn when we were looking at foreign actors. So there is a concept called coordinated inauthentic behavior. When we look at this, we look at the content. That doesn’t mean the narrative; we won’t say this narrative is true and this one is false. It is more an understanding of: is the content coming from websites that came up yesterday? Is there something dubious about the provenance? Is the content inflammatory or harmful? That is the content piece. Then we look at the source: who is putting these narratives out into the world? Are the accounts authentic? Do the people exist? Or is it an astroturf campaign, where the people who are saying this do not exist? Then we look at the dissemination pattern: is it something that looks manipulative? Does it look like it is being deliberately put out with an intent to game an algorithm? Does it look like there is mass coordination across 25 pages that do not disclose they are owned by the same person, to amplify content? That is sort of a spam tactic, but something we see. And this is the real problem. The question becomes: where is the bright line when the accounts are authentic, the content is real and legitimate, and it is not a fly-by-night operation? When we look at the patterns, what do we think is OK? … So we are trying to figure out where the lines are. And that is, unfortunately, the front line of the policy debates today.

Vidya:

I guess we are getting close to the end of our session, and I want you to help us look ahead. Even though social media is a recent phenomenon in our lives and in how we consume information, it has changed the way people connect with each other, and brought a lot of good things. What are your hopes for how we grapple with the challenge of false narratives? … Are these genies out of the bottle? Are there more things we should think about? For those of us in this community, not just as leaders but as voters, citizens, parents, and neighbors, how do we think about what comes next and how to make sure it is helping the system?

Renée: I think it is education, to a point. This current environment is maybe only six years old, but propaganda is not new; it is hundreds of years old. It started with the Catholic Church: the word comes from [the name of a Church congregation created to] propagate the faith. This is a very old phenomenon. Propaganda evolves with the information environment of the day, so it looked different with the printing press, then television, and now the internet, and the social internet, which is different from the blog era. As that evolution has happened, I think the major change is that we are all active participants now. That wasn’t true in the earlier environments.

In the era of radio and television you could hand an article to a friend, but you weren’t an active part of viral dissemination. This is where I think we have a responsibility to think about what we are sharing, particularly if you are an account with a lot of followers. You have a responsibility, and I think people need to internalize that. Some people say, “I just put it out there.” No, that is not how it works. You are an active participant. You worked hard to develop a reputation as a thought leader, whether through your corporate or personal channel, and what you put out there has impact; it pushes the narrative along. So take the time to ask: is this accurate, before I share it? Am I sharing this because I feel a sense of outrage pushing me to share it? Oftentimes that outrage is engineered deliberately. You talk to people and they say, “I didn’t read the article, but the headline said something.” We need people to do the check and understand what they are sharing, because they further the propagation of the narrative when they engage and when they comment. That is somewhere we bear individual responsibility, along with having an understanding of harm and how content should be moderated. As far as our agency goes, I would say it is the fact that our voices carry weight, and what we say and do matters. It influences what the people around us see, so we do need to take responsibility for that.

Vidya:

I think that is great. It speaks to the question of ethics: we are all participants in the system, so we have a responsibility to look at what we are sharing. Let me ask you about the role of government. Many of us have seen the government not engage on this issue. In fact … the most impactful policies affecting the way technology companies deal with false information are coming from the EU. How do you think about this, and what is the role of government? You have testified before Congress and talked to elected officials about this; what is your take?

Renée: I think one of the challenges in the U.S. is that the partisan divide is so strong. You will see commentary that says both Democrats and Republicans want to regulate. Yes, it is true; they want completely opposite things, though, particularly on content moderation. There are a lot of people in the Democratic community and on the left who believe the platforms are not moderating enough: they are letting too much hate speech stand, they are allowing harassment to go unchecked, and they want to see more moderation. Then you have, on the right, people who are more on the free speech side and believe that as long as someone is not threatening to kill someone, the definition of harm should be limited, even limited to how we think about it in terms of real-world speech. There are dynamics of online speech that make it distinct, but they think it should fall under the same kind of constitutional framework as offline speech.

So, when people talk about how tech firms should moderate: Ted Cruz and Elizabeth Warren both say they should be regulated, but on the specifics there are disagreements. The question is, can you come up with some oversight body that adjudicates some of these issues, or looks to see where tech is having perhaps unintentional harm … is it doing something that is negatively impactful on society? … without the government weighing in on content moderation specifics, and while recognizing these are private platforms. This is a real challenge. In Europe, they have chosen to focus on the privacy angle, so they have instituted a series of consumer protection regulations. There are trade-offs with any of these approaches.

Interestingly enough, that impacts security researchers: people in Europe don’t have the visibility that some of us do into campaigns that are localized in the U.S. So … anytime you make a regulatory decision you have to consider what the trade-off will be. I think, right now, the conversation on regulation, as far as I can tell, is halted ahead of the November election … it won’t develop in a meaningful way until January. We thought it would be a major topic of conversation given the president’s executive orders and his concerns. But instead we have the coronavirus and other real-world problems, so tech regulation has fallen by the wayside.

Vidya:

What we have heard is that the infrastructure, the technology platforms, and the way they promote content that can go viral end up having huge implications for how we as a society grapple with issues, whether it is choices about how we feel about the election, or public information, or COVID-19, or the protests. Thank you so much. You have shared so much with us on how to understand this brave new world of how narratives flow. And for all of us who carry a huge responsibility in trying to lift up the best information, you have given us a lot to think about. We have to remember that we have power and responsibility as truth-tellers, and we can push things in a better direction. The stakes are high. On behalf of everyone, Renée DiResta, thank you for joining us.

Renée: Thank you for having me.
