With Fact-checks, Twitter Takes on a New Kind of Task

Twitter introduces fact-checking labels for tweets

In addition to disputing misleading claims made by U.S. President Donald Trump about mail-in ballots this week, Twitter has added fact-checking labels to thousands of other tweets since introducing the alerts earlier this month, mostly on posts about the coronavirus.

The company does not expect to need additional staff for the undertaking, Twitter spokeswoman Liz Kelley said on Saturday. Nor is it partnering with independent fact-checking organizations, as Facebook and Google have, to outsource the debunking of viral posts flagged by users.

Social media platforms have been under fierce scrutiny over how they police rapidly spreading false information and other types of abusive content since Russia exploited the networks to interfere in the 2016 U.S. presidential election.


Fact-checking groups said they welcomed Twitter's new approach, which adds a "get the facts" tag linking to more information, but said they hoped the company would more clearly lay out its methodology and reasoning.

On Friday, Chief Executive Jack Dorsey acknowledged the criticism, saying he agreed fact-checking "should be open source and thus verifiable by everyone." In a separate tweet, Dorsey said more transparency from the company was "critical."

Separating From Larger Competitors

The company's move to label Trump's claims about mail-in ballots separates it from larger competitors such as Facebook, which declares its neutrality by leaving fact-check decisions to third-party partners and exempts politicians' posts from review.

"To a degree, fact-checking is subjective. It's subjective in what you pick to check, and it's subjective in how you rate something," said Aaron Sharockman, executive director of U.S. fact-checking site PolitiFact, who said Twitter's process was opaque.

Team Continuing to Expand the Effort to Include Other Topics

Twitter telegraphed in May that its new policy of adding fact-checking labels to disputed or misleading coronavirus information would be expanded to other topics. This week, after tagging Trump's tweets, it said it was now labeling misleading content related to election integrity. Twitter's Kelley said the team is continuing to expand the effort to include other topics, prioritizing claims that could cause people immediate harm.

Trust and Safety Division Tasked With the 'Leg-Work' on Such Labels

A Twitter spokesman said the company's Trust and Safety division is tasked with the "leg-work" on such labels, but declined to give the team's size. This week, Twitter defended one of these employees after he was blasted as politically biased by Trump and his supporters over 2017 tweets.

Twitter also drew Trump's ire by placing a "glorifying violence" warning over his tweet about protests in Minnesota over the police killing of a black man, invoking a 2019 policy long awaited by the site's critics. In the tweet, Trump warned the mostly African-American protesters that "when the looting starts, the shooting starts," a phrase used during the civil rights era to justify police violence against demonstrators. Facebook did not take action on the same post.

Decisions on the Labels Made by a Team of Executives

The Twitter spokesman said decisions on the labels are made by a team of executives, including Sean Edgett, Twitter's general counsel, and Del Harvey, the vice president of Trust and Safety. Chief Executive Officer Jack Dorsey is informed before actions are taken.

The company's curation team aggregates tweets on the disputed claims and writes a summary for a landing page. The team, which includes former journalists, normally pulls together content in categories including Trending, News, Entertainment, Sports and Fun. Twitter, whose executives at one time referred to it as "the free speech wing of the free speech party," has been tightening content policies for several years after recognizing that abuses had grown rampant.

Coping With Fake News and Abuse

Dorsey met privately with academics and senior journalists shortly after the 2016 U.S. election, which former New York Times editor Bill Keller, who attended one meeting, called an "ahead-of-the-pack effort" to cope with fake news and abuse. Critics say the company was slow to act after that, but it has accelerated its efforts in the last year.

In March, it debuted its "manipulated media" label on a video of Joe Biden, the presumptive Democratic nominee set to face Trump in the Nov. 3 election, that was posted by the White House social media director.

Smaller Teams, Bigger Goals

Twitter's content review operation is small relative to its peers, with about 1,500 people. Facebook has about 35,000 people working on "safety and security," including 15,000 moderators, most of them contractors, although it also dwarfs Twitter in size: 2.4 billion daily users compared with Twitter's 166 million.

From January to June last year, Twitter said it took action against 1,254,226 accounts for violating its content rules. Twitter does work with independent organizations on content issues, but fact-checking groups, some of them paid by Facebook, told Reuters they wanted more dialogue with Twitter about its new steps.