Zuckerberg justifies the termination of the fact-checking program by emphasizing the need for more free speech and less censorship on Meta’s platforms. This decision marks a surprising shift from Meta’s previous stance, when it praised the program’s success to the EU Parliament just months ago. Back then, Meta pointed out that users appreciated flagged content and that the program helped reduce clicks on misinformation. Now, however, Zuckerberg has criticized fact-checkers, accusing them of bias and using censorship to promote certain agendas.
MediaFutures center leader Christoph Trattner finds it shocking that one of the biggest social media platform providers no longer sees the need for proper content verification, in an age when it has never been easier to artificially generate and spread mis- and disinformation on a massive scale.
“Rather than giving up digital content moderation efforts, we have to invest even more in fact-checking tools and help users to navigate political news in our challenging time”, Trattner says.
We have consulted our partners from Faktisk.no, BBC Verify and the Media Cluster Norway to analyse the situation.
The fact-checking organisation Faktisk.no challenges the core premise of Zuckerberg’s reasoning, emphasizing that fact-checking serves to inform, not censor.
“The development is concerning but not surprising. In the international fact-checking community, there has been uncertainty about Meta’s collaboration with fact-checkers for several years. Now, those concerns have become a reality—at least in the United States”, writes Stian Eisenträger, editor-in-chief of Faktisk.no, in his comment.
Faktisk.no is concerned about how the decision could harm transparency and hinder the fight against misinformation, particularly for smaller organizations dependent on Meta’s tools and funding.
“Zuckerberg’s framing of fact-checking plays into harmful narratives that undermine the credibility of independent journalism”, Eisenträger writes.
Free speech should coexist with protections against false or harmful content
Across the pond, where our partner BBC Verify works to prevent the spread of misinformation, senior journalist Kayleen Devlin disagrees with Zuckerberg’s stance on removing moderation and fact-checking.
“As journalists, we both witness and hear about some of the real impacts that misinformation and hate speech can have on society. Here in the UK, it was misinformation around the identity of the attacker in Southport that led to disruptive riots. And during the Covid pandemic, we heard how misinformation fuelled vaccine hesitancy. These are just a couple of examples that exist within an entire tapestry of them”, she says.

Devlin thinks that valuing free speech does not have to mean dismantling systems that detect and tackle untrue and potentially harmful speech.
“Back in August, Meta got rid of the tool CrowdTangle, which helped us track the spread of misinformation on its platforms. While there have been some great individual instances in which a community notes style approach has been helpful in pointing out mistruths, it’s also an approach that is subject to partisan bias”, Devlin adds.
This is how it works
To be a fact-checker for Meta, an organisation must be a member of international fact-checking networks such as the IFCN and the EFCSN. These networks require their members to follow strict principles that ensure independence and accuracy, including making all fact-checks publicly verifiable and maintaining political neutrality.
Faktisk.no has been part of the Third-Party Fact-Checking Program since 2018. Until now, Meta has provided fact-checking organisations with access to tools such as the Meta Content Library, which gives an overview of public posts on its platforms.
Fact-checkers could link their investigations to content already published on Facebook, Instagram and Threads, and Meta paid them compensation for the work. Content identified as misinformation had its reach reduced, and posts containing false information were labelled as “checked by independent fact-checkers”.
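In rough terms, the program combined two mechanisms: a visible label linking to the fact-check, and a reduction in the post’s distribution. The sketch below illustrates that flow only; Meta’s actual pipeline and demotion factors are not public, and the names used here (`Post`, `apply_fact_check`, `DEMOTION_FACTOR`) are hypothetical.

```python
from dataclasses import dataclass, field

# Assumed demotion factor: flagged posts reach only a fraction of their
# normal audience. The real value used by Meta is not public.
DEMOTION_FACTOR = 0.2

@dataclass
class Post:
    text: str
    reach_score: float = 1.0          # 1.0 = normal distribution
    labels: list = field(default_factory=list)

def apply_fact_check(post: Post, verdict: str, fact_check_url: str) -> Post:
    """Attach a fact-check label and demote reach for false content.

    Note that the content is labelled and down-ranked, not removed:
    the fact-check informs readers rather than censoring the post.
    """
    if verdict == "false":
        post.labels.append(
            f"Checked by independent fact-checkers: {fact_check_url}")
        post.reach_score *= DEMOTION_FACTOR
    return post
```

A post rated “false” keeps its text but carries a label and a reduced reach score; a post with any other verdict passes through unchanged.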
Since the pandemic, however, Faktisk.no’s use of Meta’s fact-checking tools has decreased significantly. Today, Faktisk mainly marks fraudulent ads and clearly manipulated or fake images. In 2023, it flagged seven social media posts as misinformation, four of which involved the misuse of celebrities and media brands in fake cryptocurrency ads. Last year, there were only two such posts.
The current agreement Faktisk.no has with Meta is valid until January 2026. In light of recent developments, Faktisk.no will now conduct a thorough assessment of whether the collaboration agreement should be renewed.
Using new technology to fill gaps
Helge O. Svela, CEO of the Media Cluster in Bergen, thinks that these changes will undoubtedly increase the amount of mis- and disinformation on Meta platforms.
“I question whether this will be a positive change for most users. Look at what happened to Twitter/X when moderation was scaled back — there was noticeable user dissatisfaction, and some people left the platform altogether”, Svela says.
Svela coordinates Project Reynir, which fights the noise that artificial intelligence creates in the form of fake images, videos, and text. By marking the origin of digital material in its metadata with C2PA technology from Project Origin, Project Reynir functions as a provenance-based fact-checking tool already embraced by the New York Times, the BBC, Google, and Microsoft. Meta, however, is not part of the cooperation.
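The core C2PA idea is to bind a signed provenance manifest to a piece of content so that any tampering is detectable. The sketch below shows that idea in a heavily simplified form: real C2PA manifests are CBOR-encoded claims signed with X.509 certificates and embedded in the media file itself, whereas the HMAC key here is only a stand-in for a publisher’s signing credentials, and all names are illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical publisher signing key; real C2PA uses certificate-based
# signatures, not a shared secret.
SECRET_KEY = b"newsroom-signing-key"

def make_manifest(content: bytes, publisher: str) -> dict:
    """Create a signed provenance manifest for a piece of content."""
    manifest = {
        "publisher": publisher,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content is unmodified."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())
```

Verification fails if either the manifest has been forged or the content no longer matches the digest recorded at publication, which is what lets downstream platforms and readers trust the stated origin of an image or video.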

Another powerful tool against mis- and disinformation comes from our partner, Factiverse. Their AI and NLP-based fact-checking technology helps readers assess content reliability and guides them to the most trustworthy sources on topics they seek more information about.
By integrating Factiverse’s technology, social media platforms like Meta can enhance community-driven solutions, such as community notes, making them more effective and reliable in protecting users from misinformation.
Factiverse believes that freedom of expression is essential for online platforms, but this freedom is most meaningful when users have the knowledge and context to distinguish fact from fiction.
“Our tools are not designed to dictate what’s true but to empower users by highlighting credible information. It’s not about limiting voices—it’s about elevating truth.”
Title image: Screenshot of Mark Zuckerberg announcing the decision: “More Speech and Fewer Mistakes” | Meta
Image 1: Kayleen Devlin at the Annual Meeting 2024. Photo: Silje Katrine Robinson
Image 2: Center leader Christoph Trattner (left) and CEO of Media Cluster Norway Helge O. Svela.