Facebook Expands Definition of Terrorist Organizations to Limit Extremism


Facebook unveiled a series of changes on Tuesday to limit hate speech and extremism on its site, amid rising scrutiny of how the social network may be radicalizing people.

The company began its announcements early on Tuesday by saying it would expand its definition of terrorist organizations, adding that it planned to deploy artificial intelligence to better spot and block live videos of shootings. Hours later, in a letter to the chairman of a House panel, Facebook said it would prevent links from the fringe sites 8chan and 4chan from being posted on its platform. And late in the day, it detailed how it would develop an oversight board of 11 members to review and oversee content decisions.

Facebook, based in Silicon Valley, revealed the changes a day before the Senate Commerce Committee will question the company, Google and Twitter on Capitol Hill about how they handle violent content. The issue of online extremism has increasingly flared up among lawmakers, with the House Judiciary Committee holding a hearing in April about the rise of white nationalism and the role that tech platforms have played in spreading hate speech. On Tuesday, a bipartisan group of congressmen also sent a letter to Twitter, Facebook and YouTube about the presence of international terrorist organizations on the sites and how those groups foment hate.

Facebook in particular has been under intense pressure to limit the spread of hate messages, pictures and videos through its site and apps. As the world’s largest social network, with more than two billion users, as well as owner of the photo-sharing site Instagram and the messaging service WhatsApp, Facebook has the scale and audience for violent content to proliferate quickly and globally.

That has been brought home in recent mass shootings in which Facebook was used to distribute violent messages. In March, the social network faced harsh criticism for not detecting and removing the live video of the killings of 51 people in Christchurch, New Zealand. And in shootings in the United States this year, including last month in El Paso, the perpetrators announced their plans in advance on 8chan, and those posts then spread through other social media, including Facebook.

“None of these changes are silver bullets,” Brian Fishman, director of Facebook’s dangerous organizations and individuals policy, said on Twitter. “There’s still tons of work to do.” But, he added, “there’s a lot of progress under the hood and we wanted to provide insight into some of that work.”

Some experts who study extremism online welcomed Facebook’s expanded effort, especially the broader definition of terrorism. But they emphasized that the plan’s effectiveness would depend on the details — where Facebook draws the line in practice, and how the company reports on its own work.

“It’s incredibly difficult to know exactly how these updates will play out in action, and oftentimes in the past, we’ve seen that the reality doesn’t match the initial announcement,” said Becca Lewis, a researcher at Stanford who studies extremist groups.

She added that Facebook would have to be comfortable with fewer people consuming content as it made these changes. “This is much tougher, in part because it would require social media platforms to grapple with their business models more fully,” she said.

Evelyn Douek, a doctoral student at Harvard Law School who studies the regulation of online speech worldwide, said she would look to Facebook’s future transparency reports, which will include data on extremist content, to see whether the changes make a difference.

“A lot of these reports can be ‘transparency theater’ where they give information and statistics, but without enough context or information to make them meaningful,” she said. Though the announcements are promising, she added, “I’ll withhold judgment until I actually see how they do it.”

Facebook has long played up its ability to catch terrorism-related content. In the last two years, the company said, it has detected and deleted 99 percent of extremist posts — about 26 million pieces of content — before they were reported to it.

But Facebook said its efforts had mostly focused on identifying terrorist organizations such as separatist groups, Islamist militants and white supremacists. It said it would now also treat people and organizations that engage in attempts at violence against civilians as terrorists, rather than defining terrorism solely as violent acts intended to achieve political or ideological goals.

The team leading its work to counter extremism on its site has grown to 350 people, Facebook added, and includes experts in law enforcement, national security and counterterrorism, as well as academics studying radicalization.

To identify more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement authorities to obtain camera footage from their firearms training programs to help its A.I. learn what real, first-person violent events look like.

To divert people away from extremist content, Facebook said, it is expanding a program that redirects users searching for such posts to resources intended to help them leave hate groups behind. Since March, the company has channeled people who search for terms associated with white supremacy to resources like Life After Hate, an organization that provides crisis intervention and outreach. Facebook said that people in Australia and Indonesia would now be rerouted to the organizations EXIT Australia and ruangobrol.id.

In a letter on Tuesday to Representative Max Rose of New York, the chairman of the subcommittee on intelligence and counterterrorism of the House Committee on Homeland Security, Facebook also said it was “blocking links to places on 8chan and 4chan that are dedicated to the distribution of vile content.” That includes all content from 8chan’s notorious /pol/ board, a page known for trafficking in violent, racist speech.

That site has been offline since the El Paso shooting, in which 22 people were killed. Fredrick Brennan, the founder of 8chan, said after the shooting that the site should be shut down. The owner of 8chan, Jim Watkins, testified before lawmakers in a closed-door hearing this month.

“We’ve seen terrorists post 8chan links to Facebook in an effort to bring widespread attention to mass shootings, which is why I’m encouraged to see Facebook’s willingness to work with me and ban those links,” Mr. Rose said. “We all need to do more to combat the spread of terrorism and keep our communities safe — Congress, tech companies, everyone.”

Inside Facebook, the company has additionally been developing an oversight board — colloquially referred to by outsiders as the Facebook Supreme Court — for more than a year. The company said on Tuesday that the board would be made up of a “diverse” set of experts, each serving for a three-year term, with a maximum of three terms of service.

Members will oversee and interpret how Facebook’s content moderators enforce the company’s existing community standards, can instruct Facebook to allow or remove content, and will uphold or reverse decisions on content removals. Members will also issue “prompt” written explanations for their decisions.

“Building institutions that protect free expression and online communities is important for the future of the internet,” Facebook’s chief executive, Mark Zuckerberg, said in a statement. “We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well.”
