‘Prebunking’ teaches people how to spot misinformation


A poll worker handles ballots for the midterm elections, in the presence of observers from the Democratic and Republican parties, at the Maricopa County Tabulation and Elections Center (MCTEC) in Phoenix, Arizona, October 25, 2022.

Olivier Touron/AFP via Getty Images
Officials in Ann Arbor, Michigan, Union County, North Carolina, and Contra Costa County, California, are posting infographics on social media urging people to “think critically” about what they see and share about voting and to seek out reliable election information.

Earlier this month, the Federal Bureau of Investigation and the Cybersecurity and Infrastructure Security Agency issued a public service announcement stating that cyberattacks are not likely to disrupt voting.

Twitter will soon roll out prompts in users’ timelines reminding them that final results may not arrive on election night.

These are all examples of a strategy known as “prebunking” that has become a major pillar of how tech companies, nonprofits and government agencies respond to misleading and false claims about elections, public health and other contentious issues.

The idea: Show people the tactics and tropes of misleading information before they encounter them in the wild — so they’re better equipped to recognize and resist them.

Mental armor

The strategy stems from an area of research in social psychology called inoculation theory.

“The idea [is] that you can build mental armor or mental defenses against something that is coming in the future and trying to manipulate you, if you learn a little about it,” said Beth Goldberg, head of research and development at Jigsaw, a division of Google that develops technology to counter online threats. “So it’s kind of like getting physically vaccinated against a disease.”

To test the inoculation theory, researchers have created games like Bad News, where players post conspiracy theories and false claims, in an effort to gain followers and credibility. They learn to use techniques such as impersonation, appeals to emotions like fear and anger, and amplifying partisan grievances. Cambridge University researchers found that after people played Bad News, they were less likely to think tweets using these same techniques were trustworthy.

In recent years, these lessons have begun to be applied more broadly in campaigns encouraging critical thinking, highlighting manipulation tactics, and preemptively countering false narratives with accurate information.

Ahead of this year’s midterm elections, the National Association of State Election Directors launched a toolkit for local officials with videos, infographics and tip sheets in English and Spanish. The overall message? Election officials are the most trusted source of election information.

Election officials on the front lines

“Every day people hear new rumors, misconceptions or misunderstandings about how elections are administered in their state,” said Amy Cohen, executive director of NASED. “And certainly local election officials are really on the front line because they’re there, in the community where the voters are.”

“Elections are safe and secure. We know that because we organize them,” read one graphic. “Elections are coming… so is inaccurate information. Questions? We have answers,” said another.

A tip sheet that local agencies can download and distribute offers ways to “protect against election misinformation”: check multiple sources of information, understand the difference between factual reporting and opinion or commentary, consider the “purpose and agenda” behind posts, and “take a moment to pause and think before reacting.”

Another focuses specifically on images and videos, noting that they can be manipulated, altered, or taken out of context.

The goal is “to tackle these patterns of misinformation rather than each individual story,” said Michelle Ciulla Lipkin, executive director of the National Association for Media Literacy Education, which worked with NASED to develop the toolkit.

A Brazilian election official examines electronic ballot boxes ahead of the second round of the upcoming October 30 presidential election in Curitiba, Brazil, October 18, 2022.

Albari Rosa/AFP via Getty Images

Other prebunking efforts attempt to anticipate false claims and provide accurate information to counter them.

Twitter has made prebunks a central part of its efforts to combat misleading or false narratives about the U.S. and Brazilian elections, the U.N. climate summit in Glasgow last year, and the war in Ukraine.

Many of them take the form of curated collections of tweets from journalists, fact checkers, government officials and other authoritative sources.

As part of its election readiness work, the company has identified themes and topics that could be “potential vectors for misinformation, disinformation or other harmful activity,” said Yoel Roth, Twitter’s head of safety and integrity.

The election prebunks “provided critical context on issues such as electronic voting, mail-in voting and the legitimacy of the 2020 presidential election,” said Leo Stamillo, Twitter’s global director of curation.

“It gives users the ability to make more informed decisions when they encounter misinformation on the platform or even outside of the platform,” Stamillo said.

Twitter has produced more than a dozen state-specific voting prebunks, including ones for Arizona, Georgia, Wisconsin and Pennsylvania.

It has also published 58 prebunks ahead of the U.S. midterms and Brazil’s general election, and has 10 more ready to go. This reflects how misleading narratives cross borders, Stamillo said. “Some of the stories we see in the United States, we’ve seen in Brazil as well,” he said.

Overall, 4.86 million users read at least one of Twitter’s election prebunks this year, the company said.

There are still a lot of unknowns about prebunking, including how long the effects last, which formats are most successful, and whether it’s more effective to focus on helping people spot the tactics used to spread misleading content or to directly counter false narratives.

Proof of success

Prebunks that focus on broader techniques or narratives rather than specific claims can avoid triggering partisan or emotional reactions, Google’s Goldberg said. “People don’t necessarily have pre-existing biases about these things. And in fact, they can be much harder for people to dismiss.”

But there’s enough evidence to support the use of prebunks that Twitter and Google are embracing the strategy.

Twitter surveyed users who saw prebunks during the 2020 election, specifically prompts in their timelines warning of misleading information about mail-in ballots and explaining why final results could be delayed. It found that 39% said they were more confident there would be no voter fraud, 50% paused and questioned what they were seeing, and 40% sought out more information.

“This data shows us that there is a lot of promise and a lot of potential, not only to mitigate misinformation after it has spread, but also to try to educate, to share context, to prompt critical thinking and, overall, to help people be more informed consumers of the information they see online,” Roth said.

At Google, Goldberg and her team worked with academic psychologists on experiments using 90-second videos to explain common misinformation tactics, including emotionally manipulative language and scapegoating. They found that showing people the videos made them better able to spot the techniques, and less likely to say they would share posts that use them.

Now Google is applying those findings in a social media campaign in Europe that aims to derail false narratives about refugees.

“It has now reached tens of millions of people, and its goal is to help people anticipate and become more resilient to this anti-migrant rhetoric and misleading information,” Goldberg said. “I’m really looking forward to seeing how promising this is at scale.”
