Fact-checking reduces the circulation of misinformation – we should not get rid of it

Fact-checking has emerged as one of the most prominent policy tools to combat the spread of misinformation, but critics have argued that it infringes upon freedom of expression and that it has no meaningful impact on the circulation of misinformation. This column provides evidence from the field that fact-checking significantly reduces engagement with posts on Facebook rated as false and also decreases users’ subsequent activity. The findings suggest that the impact of fact-checking primarily results not from Facebook’s actions to hide or demote content, but rather from users’ behavioural responses, challenging the idea that it suppresses free speech.

Misinformation and the circulation of fake news are major concerns for policymakers and the public at large. The public debate on strategies to limit misinformation explores different policy options, including regulation of social media platforms, such as the EU’s Digital Services Act (EU 2023), and short-term, practical, and cost-effective interventions such as confirmation clicks (Henry et al. 2022), fact-checking (Barrera et al. 2020, Henry et al. 2022, Nyhan 2020), and nudges (Pennycook and Rand 2022).

Meta (Facebook) was a pioneer in implementing fact-checking at scale with its Third-Party Fact-Checking (TPFC) programme. Working with the International Fact-Checking Network (IFCN), the TPFC programme accredits fact-checking organisations around the world – currently around 160, most of which are media outlets. Meta fully delegates the fact-checking process to these partners – identifying fake news, writing the fact-checks, and rating posts directly on Facebook. In January 2025, Meta announced the end of this programme in the US, citing concerns over potential threats to free speech. Other critics have questioned whether fact-checking has any meaningful impact on the circulation of misinformation. What can the academic literature teach us about these concerns?

One strand of the literature shows that fact-checking, though it can correct factual beliefs, does not succeed in correcting the subjective beliefs or impressions left by fake news (Barrera et al. 2020, Swire et al. 2017, Nyhan et al. 2017). While it may not be effective in changing people’s beliefs, there is a broad consensus that fact-checking can play a role in decreasing the circulation of misinformation, even when users are not forced to read the fact-checks (Henry et al. 2022, Pennycook et al. 2020, Yaqub et al. 2020). Guriev et al. (2023) use a structural model to uncover the channels through which fact-checking operates and conclude that users circulate less fake news, not because the information makes them update their beliefs but because veracity becomes more salient in their sharing decision.

However, most of the evidence comes from randomised survey experiments, which typically do not shed light on the possible longer-term impacts of fact-checking on citizens’ behaviour, and which test interventions that might not be scalable. In our recent paper (Cagé et al. 2025), we advance this literature by providing evidence from the field on both the impact of fact-checking on engagement with false content and the longer-term effects on the sharing behaviour of users who have been shown to circulate fake content.

The fact-checking process

We partnered with AFP Factuel, the largest fact-checking organisation in the world and a member of Meta’s TPFC programme. We also recruited a journalist who was an integral part of the fact-checking team but was also responsible for collecting data for the project.

The fact-checking process starts with the identification of suspicious posts circulating potential fake news – the stories. During the morning editorial meetings, the fact-checkers decide collectively which story to cover based on the individual proposals. The journalist working with us collected information on all stories – fact-checked or not – as well as engagement on all public posts related to each story. For stories that were left aside, he also collected the reasons for doing so, based on discussions with the editor in chief (reasons such as the content was too complicated to check, it was probably true, or it was not viral enough). The journalist then wrote a fact-check on the stories that were selected, a process that is rather long (the median time being two days). The fact-check was subsequently published on the AFP website and the journalist could then rate posts related to the story directly on Facebook, choosing among several ratings (“false”, “partially false”, etc.). Facebook only pays partner organisations for the first rating, without providing direct incentives for extensive searches for suspicious posts.

Figure 1 Impact of being fact-checked on the circulation of stories on Facebook

Notes: The figure reports the results of the estimation of an event-study model. An observation is a story. Observations are weighted using weights derived from a propensity score matching procedure using as covariates engagement at consideration date and engagement 12 hours before consideration. The dependent variable is the logarithm of the cumulative number of engagements.
Source: Cagé et al. (2025).
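To make the estimation strategy behind Figure 1 concrete, here is a minimal sketch of the weighting-plus-event-study logic described in the notes. It is not the authors’ code: the data are simulated and all variable names are illustrative. It fits a propensity score on the two pre-treatment engagement covariates, reweights control stories accordingly, and then regresses log cumulative engagement on treated × event-time interactions with standard errors clustered by story.

# Hypothetical sketch (not the authors' code) of the two-step procedure in the
# notes to Figure 1: propensity-score weights from pre-treatment engagement,
# then a weighted event-study regression with story-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated story-level data: treated = fact-checked, controls = left aside.
n = 200
stories = pd.DataFrame({
    "story": np.arange(n),
    "treated": rng.integers(0, 2, n),
    "eng_t0": rng.lognormal(4.0, 1.0, n),      # engagement at consideration date
    "eng_pre12h": rng.lognormal(3.8, 1.0, n),  # engagement 12 hours before
})

# Step 1: propensity score on the two pre-treatment covariates, turned into
# inverse-probability weights so control stories resemble treated ones.
X = stories[["eng_t0", "eng_pre12h"]]
ps = LogisticRegression().fit(X, stories["treated"]).predict_proba(X)[:, 1]
stories["w"] = np.where(stories["treated"] == 1, 1.0, ps / (1 - ps))

# Step 2: expand to an event-time panel around the editorial meeting (t = 0)
# and simulate log cumulative engagement with a post-treatment drop.
panel = stories.merge(pd.DataFrame({"t": np.arange(-6, 7)}), how="cross")
post_drop = np.where((panel["treated"] == 1) & (panel["t"] > 0), -0.08, 0.0)
panel["log_eng"] = (np.log(panel["eng_t0"]) + 0.05 * panel["t"]
                    + post_drop + rng.normal(0, 0.1, len(panel)))

# Weighted event study: treated x event-time dummies, t = -1 as the baseline,
# standard errors clustered at the story level.
res = smf.wls("log_eng ~ C(t, Treatment(-1)) * treated",
              data=panel, weights=panel["w"]).fit(
                  cov_type="cluster", cov_kwds={"groups": panel["story"]})
print(res.params.filter(like=":treated"))  # per-period treatment effects

In the actual paper the outcome is observed engagement rather than simulated data; the coefficients for t > 0 are what trace out the average drop discussed below.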

Impact on circulation

This setting allows us to quantify the causal impact of fact-checking on the circulation of fake news in a unique way, based on the comparison between fact-checked stories and those left aside or, within stories, on the comparison between rated and unrated posts (allowing us to control for story-specific time trends). To make this approach more convincing, we impose restrictions on the stories in the control group, based mostly on the reasons recorded for not selecting the story.

Overall, we find that engagement with fact-checked stories decreases after the treatment (i.e. after the fact-check is published) compared to control stories (those ultimately not covered by the fact-checkers); in Figure 1, date 0 corresponds to the time the stories were discussed in the editorial meetings. On average, we observe an 8% decrease. This effect is entirely driven by stories rated “false”; there is no significant effect when a more nuanced label (such as “partially false”) is used. The speed of the fact-check also matters: the observed drop is driven by the stories that are rated quickly (i.e. in less than the median fact-checking time of two days). There is, moreover, an interesting dimension of heterogeneity according to the topic of the story: according to our estimates, fact-checking is much more effective for stories related to the war in Ukraine and much less so for environmental issues such as climate change, possibly because beliefs are less entrenched in the former case. We find similar effects when adopting a within-story approach, comparing rated and non-rated posts.

This significant causal effect of fact-checking on the circulation of misinformation could be due to an enforcement channel, corresponding to the constraints imposed by Facebook on posts rated “false” (blurring and demoting the content), to a behavioural channel, corresponding to the reaction of the users, or to both. We provide strong evidence of the existence of the behavioural channel. First, we show in Figure 2 – using the previously described comparison between fact-checked and control stories – that users are twice as likely to delete posts related to fact-checked stories, showing an active response to the treatment. Our work also points to dynamic effects on the behaviour of accounts that had a post fact-checked in the past. We show that those accounts reduce their engagement on Facebook in the weeks that follow the fact-check. Moreover, they seem to be less likely to repost fake news: an account that had a post rated “false” takes much longer to reappear in a new story fact-checked by AFP than an account whose story was considered for fact-checking but ultimately not checked.

Figure 2 Impact of being fact-checked on the deletion of posts


Notes: The figure reports the results of the estimation of an event-study model. An observation is a story × time pair. Standard errors are clustered at the story level. The dependent variable is the share of the posts present at date 0 that have been deleted by date t.
Source: Cagé et al. (2025).

An effective process that can be improved

Fact-checking has emerged as one of the most prominent policy tools to combat the spread of misinformation. However, it has faced persistent criticism from certain groups who argue that it infringes upon freedom of expression. This was notably the rhetoric invoked by Mark Zuckerberg when suspending the TPFC programme in the US. Others contend that fact-checking consumes substantial resources while delivering limited practical impact. Our findings speak to both of these critiques and offer insights for potential policy reforms.

First, we show that the impact of fact-checking does not primarily result from Facebook’s actions to hide or demote content, but rather from users’ behavioural responses, challenging the idea that it suppresses free speech. Second, our paper shows that fact-checking significantly reduces engagement with rated posts and also decreases users’ subsequent activity. Back-of-the-envelope calculations suggest an upper-bound cost of between €0.15 and €0.35 per removed engagement. This estimate is highly conservative as it does not account for longer-term or spillover effects, such as changes in user behaviour across other stories or social media platforms.
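To illustrate the arithmetic behind such a figure, the sketch below simply divides a programme budget by the number of engagements it removes. All inputs are hypothetical placeholders chosen for illustration (the column does not report the underlying numbers); only the roughly 8% engagement drop comes from the results above.

# Hypothetical back-of-the-envelope calculation; the budget and engagement
# figures below are illustrative placeholders, not numbers from the paper.
def cost_per_removed_engagement(programme_cost_eur: float,
                                baseline_engagements: float,
                                drop_share: float) -> float:
    """Cost of one removed engagement = total cost / engagements avoided."""
    return programme_cost_eur / (baseline_engagements * drop_share)

# Example: a EUR 100,000 budget, 5 million engagements with fact-checked
# stories, and the ~8% average drop estimated in the paper.
print(f"EUR {cost_per_removed_engagement(100_000, 5_000_000, 0.08):.2f}")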

There are, however, several ways in which fact-checking, as implemented in a system like the Third-Party Fact-Checking programme, can be improved. First, the selection of the content to be verified still relies broadly on individual monitoring strategies, in particular tracking specific groups, while the algorithmic detection tool provided by Facebook is used in less than one fifth of cases. Improving this tool and better structuring the search around clearly defined objectives could yield substantial benefits. Second, the rating process itself could also be improved. We find that only around half of the posts related to a story identified as “false” were actually rated – and this is likely a lower bound, given the potential for undetected related posts. Hence, abandoning the Third-Party Fact-Checking programme does not appear justified, either on the grounds of free speech or on those of the system’s efficacy, though the programme could be significantly improved at low cost.

Source: VoxEU
