Meta's recent decision to end its third-party fact-checking program has sparked widespread concern among experts who warn the move could lead to unchecked spread of misinformation and hate speech across its platforms.
The company announced it will replace its longstanding fact-checking partnerships, established in 2016, with a crowdsourced approach similar to X's Community Notes system. This shifts the burden of identifying false information to users across Facebook, Instagram, Threads, and WhatsApp.
"Most people do not want to have to wade through a bunch of misinformation on social media, fact checking everything for themselves," says Angie Drobnic Holan, director of the International Fact-Checking Network at Poynter. She notes the existing program effectively reduced the spread of hoaxes and conspiracy theories.
Meta CEO Mark Zuckerberg defended the decision in a video statement, claiming fact-checkers were "too politically biased" and framing the change as a return to free expression. The company also cited concerns about over-enforcement, stating that 10 to 20 percent of the content it removed in December may not have violated its policies.
Critics view the move as politically motivated, pointing to Meta's recent appointment of Republican lobbyist Joel Kaplan as chief global affairs officer. The timing also aligns with comments from President-elect Trump, who said the changes were "probably" a response to his influence.
Environmental groups and scientists express particular alarm about the potential proliferation of climate change denial. "Anti-scientific content will continue to proliferate on Meta platforms," warns Kate Cell of the Union of Concerned Scientists.
Civil rights advocates fear increased targeting of marginalized communities. "Meta is opening the door to unchecked hateful disinformation about already targeted communities like Black, brown, immigrant and trans people, which too often leads to offline violence," says Nicole Sugerman from nonprofit Kairos.
The policy shift marks a dramatic reversal of Meta's previous stance on content moderation and renews questions about social media's role in curbing false information. As platforms continue to rethink content oversight, the decision's impact on public discourse remains to be seen.