by Larry Magid

There was big news from Paris this week where 18 countries and numerous companies, including Facebook, Google, Amazon and Twitter, signed the Christchurch Call, pledging to work together “to eliminate terrorist and violent extremist content online.” The document, which was not signed by the United States, says that “respect for freedom of expression is fundamental.”

Signed two months after the horrific terrorist attack on a mosque in Christchurch, New Zealand, the effort was led by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron.

The non-binding “call” is more of a pledge than a mandate; it’s aspirational, not regulatory. But it’s an example of how governments and big tech companies can come together to agree on how to solve a thorny and sometimes controversial issue.

As an internet safety advocate (I’m CEO of ConnectSafely.org, which receives support from tech companies including Google and Facebook), I agree with the Christchurch Call’s efforts to curtail violent extremism, and I’m also good with Facebook terminating the accounts of known violent, racist, hateful, anti-Semitic or Islamophobic individuals, including Alex Jones and Louis Farrakhan. But as a journalist and an ACLU member, I’m also a strong advocate of free speech and worried about any efforts to limit it, even when that speech is unpleasant or downright vile. Battles over speech issues are almost always about topics that make at least some people very uncomfortable. Courts are sometimes called on to rule on issues like pornography, hate speech and even speech that could possibly lead to violence, but I have yet to hear about a court case regarding cat videos, however annoying they may be to some people.

Private vs. government rules

Having said that, I’m OK with voluntary guidelines for certain types of online content, especially content that incites or celebrates violence, as long as enforcement is at the discretion of the platforms that host the content. But I admit that it can be a slippery slope, and when governments get involved, it’s important to limit suppression to speech that is clearly associated with real-world dangers, such as child pornography or falsely yelling fire in a crowded theater.

The key issue here is the role of government. In the United States, our precious First Amendment applies to government, not the private sector. You have the right to say almost anything you want if you’re standing on the sidewalk in front of my house, but if you start spouting racist, sexist, homophobic or anti-Semitic propaganda in my living room, I have the right to show you the door. Companies like Facebook and Google also have that right and, one could argue, a responsibility to protect their communities from vile content.

It’s also important to note that the U.S. First Amendment doesn’t apply in other countries. As much as we may not like it, Americans can’t prevent countries like China, Iran, Saudi Arabia and Russia from censoring what their citizens can say, read, hear, view or access online. Even our more liberal European allies that do allow free expression have laws that might not make it past our Supreme Court, such as bans on Nazi paraphernalia and propaganda. Some of the symbols on display at the white nationalist march in Charlottesville would have been prohibited in Germany and other European countries that experienced the horror of Nazi rule.

Because it’s voluntary, I think the U.S. should have signed the Christchurch Call, but I understand the principle behind the Trump administration’s decision not to sign. It’s hard to argue against the administration’s statement that “the best tool to defeat terrorist speech is productive speech.” Counter-speech has long been the recommended antidote to hate speech, and, in some ways, I’m impressed that someone in the White House understands that.

Facebook’s new ‘one-strike’ policy

I’m not sure if it was a coincidence, but the Christchurch Call came the same day that Facebook announced it would institute a “one-strike” policy, temporarily blocking anyone from Facebook Live (its live-streaming feature) who shares a link to a statement from a terrorist group without context. Like the Christchurch Call, Facebook’s decision comes in the wake of the massacre at the Christchurch mosque, where the gunman live-streamed his murderous attack. It took several minutes before Facebook was able to stop the feed and remove the content from its site, and links to the content continued to circulate.

The decision by Facebook and other tech companies to sign the Christchurch Call, along with actions such as suspending people who share certain links and Thursday’s announcement that Facebook had defused an Israel-based campaign to disrupt elections, mostly in Sub-Saharan Africa, is a sign that the social media behemoth is waking up and taking its responsibilities seriously. There is plenty of reason to be skeptical of a company that has had so many mishaps on so many fronts, but there is also reason to be glad that it appears to be moving in the right direction.

