Gatekeepers or Censors? How Tech Manages Online Speech
Apple, Google and Facebook this week erased from their services many — but not all — videos, podcasts and posts from the right-wing conspiracy theorist Alex Jones and his Infowars site. And Twitter left Mr. Jones’s posts untouched.
The differing approaches to Mr. Jones exposed how unevenly tech companies enforce their rules on hate speech and offensive content. There are only a few cases in which the companies appear to consistently apply their policies, such as their ban on child pornography and instances in which the law required them to remove content, like Nazi imagery in Germany.
When left to make their own decisions, the tech companies often struggle with their roles as the arbiters of speech and leave false information, upset users and confusing decisions in their wake. Here is a look at what the companies, which control the world’s most popular public forums, allow and ban.
Facebook at the Center of the Storm
Of all the tech companies, Facebook has faced the biggest public outcry over what it allows on its platform.
Whenever the social media company has been pressed to explain its decision-making, it has referred to its community standards, a public document that outlines Facebook’s rules for users. The company has outright bans against violent content, nudity and terrorist recruitment propaganda. The rules on other types of content, including hate speech and false news, are more ambiguous.
When asked about Infowars last month, Facebook’s chief executive, Mark Zuckerberg, said he wouldn’t remove pages hosting popular conspiracy theories of the type Mr. Jones is known for sharing. Mr. Zuckerberg then turned the conversation to the subject of the Holocaust, defending Facebook users who deny the Holocaust occurred.
His awkward explanation prompted outrage, and less than a day later, Mr. Zuckerberg offered a public apology.
Now, less than a month later, Facebook has banned Mr. Jones and removed four pages belonging to him — including one with nearly 1.7 million followers — for violating its policies. The ban means that while Mr. Jones still has an account and can view content on Facebook, he is suspended from posting anything to the platform, including to his personal page or any pages on which he is an administrator.
In a post, Facebook said it banned Mr. Jones and his pages for “accumulating too many strikes.”
The company has refused to say how many strikes is too many, however. It has also not answered questions on how long Mr. Jones will be banned or whether Facebook will be reviewing similar content posted by other right-wing conspiracy theorists.
It’s unclear whether the actions Facebook has taken against Mr. Jones signal a new approach by the company against hate speech or whether it is, once again, responding to an isolated case because of public pressure.
— Sheera Frenkel
Google’s Wide Gray Area
Of all the major online services, Google’s YouTube is probably the most explicit about what is and is not allowed. But even with its published “Community Guidelines,” YouTube has wrestled with the subjective interpretation of those rules.
Users can flag videos that they believe violate those guidelines, which include bans on videos that contain nudity or sexual content or that incite violence. YouTube will then review those flagged videos for potential violations. In addition, YouTube’s computer systems comb the site for videos that violate its rules.
But many videos operate in a gray area. Even in YouTube’s own explanation of “hateful content,” the company calls it a “delicate balancing act” between free expression and protecting YouTube users.
YouTube still hosts videos of Ahmad Musa Jibril, an Islamic cleric from Dearborn, Mich., whose sermons were viewed by one of the knife-wielding attackers in the terror attack on London Bridge last year. His sermons posed a quandary for the video service because the cleric does not directly call for violent jihad and is, therefore, not in clear violation of community guidelines. YouTube now presents Mr. Jibril’s sermons behind a warning that the video has been deemed “inappropriate or offensive” by part of the YouTube community.
Mr. Jones incurred two content violations from YouTube over the last year. In February, YouTube said he had violated its policies regarding harassment and bullying when a video on his channel claimed that David Hogg, one of the outspoken student survivors of the school shooting in Parkland, Fla., was a “crisis actor.”
In Mr. Jones’s most recent violation last month, YouTube took down four of his videos that included hate speech against Muslim and transgender people as well as footage of a child being shoved to the ground. YouTube said the videos had violated its policies pertaining to hate speech, harassment and child endangerment.
— Daisuke Wakabayashi
Twitter, the ‘Free Speech Wing of the Free Speech Party’
Twitter has been more permissive of controversial content than its social media peers, with executives calling it “the free speech wing of the free speech party.” While Facebook removes nude or gory images, Twitter is more tolerant of adult and violent content. Rather than deleting these kinds of images, Twitter tends to hide them behind warnings that require users to click through before they can see the content.
Twitter’s approach has provoked plenty of criticism, particularly around its lax handling of harassment. Celebrities like the actress Leslie Jones have been temporarily driven off the platform by swarms of abusers.
Jack Dorsey, the company’s chief executive, has said that the company needed to do better at policing trolls. In December, Twitter said it would promote “healthy conversation” by using a combination of human moderation and machine learning to detect trolls and minimize the appearance of their posts on the platform.
Although the parents of several Sandy Hook shooting victims are suing Mr. Jones for defamation, a Twitter spokesman said that neither Mr. Jones’s personal account nor his Infowars account is currently in violation of Twitter’s policies. Tweets questioning the school shooting in Sandy Hook, Conn., remain live on both accounts.
The simultaneous takedowns across YouTube, Spotify and Facebook are troubling, said Kevin Bankston, the director of the Open Technology Institute at New America, a nonpartisan research organization in Washington. The number of bans in quick succession from multiple companies raised the specter that this could have been influenced by outside political pressure rather than a straightforward application of company policies, he said.
Twitter’s decision to allow Mr. Jones and Infowars to stay on its platform may reflect a commitment to consistent policy enforcement, Mr. Bankston said.
“A Twitter that’s not accountable to its own rules is not accountable to anybody,” he said.
— Kate Conger
Apple: ‘I’ll Know It When I See It’
Without a social-media platform, Apple typically avoids the content controversies that ensnare its peers. Yet the iPhone maker still makes many decisions about what apps, podcasts, songs and videos it will make available on its popular services.
Apple on Sunday banned five of the six Infowars podcasts from its podcasts service. Apple determined the sixth podcast, RealNews with David Knight, did not violate its policies, which prohibit podcasts that “could be construed as racist, misogynist, or homophobic” or that depict “graphic sex, violence, gore, illegal drugs, or hate themes.” In the past, Apple has also removed neo-Nazi songs and the Nazi anthem from iTunes.
Apple’s decision to ban the Infowars podcasts was surprising partly because an app that Infowars introduced last month was gaining steam on Apple’s App Store. From July 12 through Monday, the Infowars app was, on average, the 33rd most popular news app on the app store, according to App Annie, an app analytics firm. On Tuesday, after news of Mr. Jones’s bans spread, the Infowars app was Apple’s fourth most popular news app, outranking every mainstream news organization.
Apple reviews all apps that apply for its App Store and determined that the Infowars app did not violate its rules. Apple posts extensive policies for apps that it distributes, including prohibitions on “content that is offensive, insensitive, upsetting, intended to disgust, or in exceptionally poor taste.” The policies give a series of examples, including content that is defamatory, discriminatory, mean-spirited or overtly sexual, that encourages violence, or that includes “realistic portrayals of people or animals being killed, maimed, tortured, or abused.”
How Apple decides which apps violate those policies is more vague, however. Apple said in its policy: “We will reject apps for any content or behavior that we believe is over the line. What line, you ask? Well, as a Supreme Court Justice once said, ‘I’ll know it when I see it.’ And we think that you will also know it when you cross it.”
— Jack Nicas