Meta is following X’s playbook on fact-checking. What it means for you

Facebook parent company Meta Platforms said Tuesday that it’s ending a third-party fact-checking program in the United States, a controversial move that will change how the social media giant combats misinformation.

Instead, Meta said it would lean on its users to write “community notes” on potentially misleading posts. Meta’s move toward crowd-sourcing its content moderation mirrors an approach taken by X, the social media platform owned by Elon Musk.

The decision by Meta sparked criticism from fact-checkers and advocacy groups, some of whom accused Chief Executive Mark Zuckerberg of trying to cozy up to President-elect Donald Trump. Trump has often lashed out at Facebook and other social media sites for what he has said are their biases against him and right-leaning points of view.

Zuckerberg, through Meta, is among the tech billionaires and companies that donated $1 million to Trump’s inaugural fund. This month, Meta also named Joel Kaplan, a prominent Republican lobbyist, as the new head of global policy. And Dana White, the chief executive of Ultimate Fighting Championship and a friend of Trump’s, is joining Meta’s board.

Content moderation on social media sites has become a political lightning rod, with Republicans accusing Facebook and others of censoring conservative speech. Democrats, on the other hand, say these platforms aren’t doing enough to combat political misinformation and other harmful content.

Each day, more than 3 billion people use at least one of Meta’s services, which include Facebook, Instagram and WhatsApp.

Here’s what you need to know about the decision:

How did Meta’s previous fact-checking program work?

Launched in 2016, Meta’s program enlisted fact-checkers certified by the International Fact-Checking Network, which is owned by the Poynter Institute, to identify and review potentially false information online.

More than 90 organizations, including Reuters, USA Today and PolitiFact, participate in Meta’s fact-checking program. Through the service, publishers have helped fact-check content in more than 60 languages worldwide on topics including COVID-19, elections and climate change.

“We don’t think a private company like Meta should be deciding what’s true or false, which is exactly why we have a global network of fact-checking partners who independently review and rate potential misinformation across Facebook, Instagram and WhatsApp,” Meta said in a post about the program.

If a fact-checker rated a post as false, Meta notified the user and added a warning label with a link to an article debunking its claims. Meta also limited the visibility of the post on its site.

What is Meta changing?

Under the new program, Facebook, Threads and Instagram users will be able to sign up to write “community notes” under posts that are potentially misleading or false. Users from a diverse range of perspectives would then reach an agreement on whether content is false, Kaplan said in a blog post.

He pointed to how X handles community notes as a guide for how Meta would handle questionable content. At X, users who sign up to add notes about the accuracy of posts can also rate other contributors’ notes as helpful or unhelpful. X evaluates how users have rated notes in the past to determine whether they represent diverse perspectives.

“If people who typically disagree in their ratings agree that a given note is helpful, it’s probably a good indicator the note is helpful to people from different points of view,” X’s community notes guide said.
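To make that principle concrete, here is a minimal toy sketch of the “bridging” idea X describes: agreement between raters who usually disagree counts for more than agreement between like-minded raters. This is not X’s or Meta’s actual algorithm (X’s open-sourced ranker is a matrix-factorization model); the data, the `disagreement` and `bridging_score` functions, and the weighting below are invented purely to illustrate the quoted principle.

```python
# Toy sketch of "bridging"-style note scoring. Invented for illustration;
# not X's production algorithm.
from itertools import combinations

# ratings[note][user] = 1 if the user rated the note helpful, else 0
ratings = {
    "note_a": {"u1": 1, "u2": 1, "u3": 0},
    "note_b": {"u1": 1, "u2": 0, "u3": 1},
    "note_c": {"u1": 1, "u2": 1, "u3": 1},
}

def disagreement(u: str, v: str) -> float:
    """Fraction of co-rated notes on which two users disagreed."""
    shared = [n for n, r in ratings.items() if u in r and v in r]
    if not shared:
        return 0.0
    return sum(ratings[n][u] != ratings[n][v] for n in shared) / len(shared)

def bridging_score(note: str) -> float:
    """Sum 'helpful' agreements between rater pairs, weighted by how
    often each pair has historically disagreed."""
    r = ratings[note]
    return sum(
        disagreement(u, v)
        for u, v in combinations(r, 2)
        if r[u] == r[v] == 1  # both rated the note helpful
    )

for note in ratings:
    print(note, round(bridging_score(note), 2))
# note_c scores highest: every rater pair, including pairs that often
# disagree elsewhere, agrees it is helpful.
```

X’s production system is considerably more involved before a note is shown publicly; this sketch captures only the cross-perspective weighting described in the guide.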

Meta said it’s also lifting restrictions around content about certain hot-button political topics including gender identity and immigration — a decision that LGBTQ+ media advocacy group GLAAD said would make it easier to target LGBTQ+ people, women, immigrants and other marginalized groups for harassment and abuse online.

Separate from its fact-checking program, Meta employs content moderators who review posts for violations of the company’s rules against hateful conduct, child exploitation and other offenses. Zuckerberg said the company would move the team that conducts “U.S. based content review” from California to Texas.

Why is Meta making this change?

It depends on whom you ask.

Zuckerberg and Kaplan said they’re trying to promote free expression while reducing the number of mistakes by moderators that result in users getting their content demoted or removed, or users being locked out of their accounts.

“The recent elections also feel like a cultural tipping point towards, once again, prioritizing speech,” Zuckerberg said in an Instagram video announcing the changes. “So we’re gonna get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms.”

Under its old system, Meta pulled down millions of pieces of content every day in December, and the company now estimates that 2 of every 10 of those actions may have been errors, Kaplan said.

Zuckerberg acknowledged that the platform has to combat harmful content such as terrorism and child exploitation, but also accused governments and media outlets of pushing to censor more content because of motivations he described as “clearly political.”

Moving the content moderation teams to Texas, he said, will help build trust that those workers aren’t politically biased.

Advocacy groups, though, say tech billionaires like Zuckerberg are just forging more alliances with the Trump administration, which has the power to enact policies that could hinder their business growth.

Nora Benavidez, senior counsel and director of digital justice and civil rights at Free Press, said in a statement that content moderation “has never been a tool to repress free speech.”

“Meta’s new promise to scale back fact checking isn’t surprising — Zuckerberg is one of many billionaires who are cozying up to dangerous demagogues like Trump and pushing initiatives that favor their bottom lines at the expense of everything and everyone else,” she said.

Trump said in a news conference Tuesday that he thought Zuckerberg was “probably” responding to threats the president-elect had made to him in the past.

Trump has accused social media platforms such as Facebook, which temporarily suspended his accounts because of safety concerns after the Jan. 6 attack on the U.S. Capitol, of censoring him. He has previously said he wants to change Section 230, a law that shields platforms from liability for user-generated content, so platforms only qualify for immunity if the companies “meet high standards of neutrality, transparency, fairness and nondiscrimination.”

How have fact-checkers responded to the move?

Fact-checkers say that Meta’s move will make it harder for social media users to distinguish fact from fiction.

“This decision will hurt social media users who are looking for accurate, reliable information to make decisions about their everyday lives and interactions with friends and family,” said Angie Drobnic Holan, director of the International Fact-Checking Network.

She pushed back against allegations that fact-checkers have been politically biased, pointing out that they don’t remove or censor posts and they abide by a nonpartisan code of principles.

“It’s unfortunate that this decision comes in the wake of extreme political pressure from a new administration and its supporters,” she said. “Fact-checkers have not been biased in their work — that attack line comes from those who feel they should be able to exaggerate and lie without rebuttal or contradiction.”

Times reporter Faith Pinho contributed to this report.
