The global free speech experiment for participants of all ages

By Anne Collier

We don’t hear about it much, but an important, historically unprecedented experiment is being conducted in Internet-connected schools, libraries, homes and workplaces in every country under every sort of government on the planet. It’s about how to protect people – children and other protected classes, for example – and their right to free expression at the same time in social media. It’s unprecedented because this is a medium that can’t be regulated in any traditional way: it is global and grassroots (increasingly user-produced) and embraces about as many perspectives as there are people using it.

[I mentioned schools. I should qualify that. Because many schools still block social media, they can’t be the communities of guided practice that they ideally would be for young social media users. Students can actively participate in this discussion – consciously practice both exercising free speech and respecting others’ right to exercise it – only when their schools support the use of social media in the classroom (e.g., blogs, wikis, Twitter, Google Docs, virtual worlds, apps, Facebook, etc.). Citizenship is best learned through practice rather than in the abstract, so I hope more and more schools will not only allow their students to join the discussion but also work with them as key participants in it in their school communities.]

“The most significant free-speech debates today … take place online,” writes George Washington University law professor Jeffrey Rosen in the New Republic – in places like Twitter, Facebook and YouTube, which get requests every day from all over the planet to delete content that from one perspective is considered harmful and from another a protest, a parody or “controversial humor.” In this social media environment, Rosen writes, “the risks that overregulation will open the door to suppression of political expression are exponentially higher than in the offline world.” He doesn’t just point to China, Russia and Iran but also to the possibility of over-regulation from Europe: “Because of its historical experience with fascism and communism, Europe sees the suppression of hate speech as a way of promoting democracy.” He seems to be saying that Europe in effect thinks democracy means decency more than free speech – but “decency” defined by whom? European democracies are “contemplating broad new laws that would require Internet companies to remove posts that offend the dignity of an individual, group, or religion.”

Whose rights determined by whom?

That sounds nice, right? But think about it: Who decides for the planet? A government? Rosen cites the example of Turkey, where it’s illegal to insult the country’s founder, Kemal Ataturk. But Greek football fans did just that in some YouTube videos, and the Turkish government demanded that YouTube take them down. YouTube did block access to the videos in Turkey but not for the whole world, so the Turkish government blocked its citizens’ access to YouTube for two years. The point here is not that few people understand the significance of other countries’ laws or cultural norms; it’s that few governments understand that a global medium is not designed to preserve the order in a single country or the rights of a single citizenry. For that matter, few users understand, or at least think about, this either. Also, sometimes we get to the social good by working through things democratically, with all kinds of people exercising their free-speech rights, often completely disagreeing with one another. [Likewise at school, students learn digital citizenship by going through this process in digital environments with the unique, focused guidance that school ideally provides.]

Not that “social media” was designed with any of this – including Turkish laws – in mind. But here we are, figuring all this out in user-generated social media now, and, because media and social-media services alike are global, it falls to the media companies that care about the issue to protect free speech. “Given their tremendous size and importance as platforms for free speech,” Rosen writes, “companies like Facebook, Google, Yahoo, and Twitter shouldn’t try to be guardians of what [Oxford and New York Universities professor Jeremy] Waldron calls a ‘well-ordered society’; instead, they should consider themselves the modern version of Oliver Wendell Holmes’s fractious marketplace of ideas – democratic spaces where all values, including civility norms, are always open for debate.”

Social media’s ‘deciders’

A fascinating part of this story is about the teams of content policy people at Twitter, Facebook, YouTube and Yahoo nicknamed “The Deciders.” Rosen describes the work of the Anti-Cyberhate Working Group they formed to work through the complexities of content policy – and how to make decisions fairly, efficiently and consistently under ridiculous conditions. By ridiculous, I mean, for example, that people upload 300 million photos and 2.5 billion messages to Facebook per day, NPR recently reported, and I just learned from Twitter that its users send 1 billion tweets every 2.5 days.

But there’s no uniformity in these companies’ approaches to content. Facebook’s resembles workplace norms and policies, according to NPR – “the kinds of rules that govern what you can say to colleagues at lunch” – while Twitter “calls itself the free speech wing of the free speech party and models its approach on the US Constitution.” Twitter, Rosen says, “wants to be a platform for democracy rather than civility.”

NPR cites him as saying that – by deleting content based on their community standards (usually called Terms of Service) – companies are “judging what is and isn’t offensive,” and that they shouldn’t be in the business of doing that. Others argue, of course, that they should, because, after all, they’re just companies, not protectors of free speech. Rosen doesn’t entirely disagree with the latter view, but these digital spaces now represent the global “marketplace of ideas” to an unprecedented degree. According to NPR, he says “Facebook has every right to determine for itself what speech to allow and what to ban.” It’s just that he “hopes the company will preserve at least the possibility for anonymous actors to say politically controversial – even occasionally offensive – things online.”

The child-protection challenge

Things get even more complicated when children occupy the same spaces as adults. It’s even more difficult to protect minors and free speech simultaneously in global social media, and US courts have struggled with federal legislation such as the Communications Decency Act (CDA) of 1996, the Child Online Protection Act (COPA) of 1998 and the Children’s Internet Protection Act (CIPA) of 2000, to name only a few. The giant social media services were not designed for children under 13 (though nobody’s completely sure of the origins of that minimum age in the US’s Children’s Online Privacy Protection Act [COPPA], passed before there were social media), yet there are millions of under-13s in social media services, most of them there with their parents’ knowledge or even help (see this) – logically, because their children’s experiences in social media are largely positive.

What the US courts and four national task forces on child online protection, including two I’ve had the honor of serving on, seem to keep arriving at is that, in a global, multicultural medium, the child end of the protecting-free-speech-and-children equation is best covered through education in homes and schools and a large, diverse set of protection tools chosen and implemented by the adults who know what’s best for them, child by child – because we’re slowly, collectively coming to understand five other things about social media:

  • how individual social-media use is
  • how embedded it is in, not separate from, our everyday lives (the actual context of what happens in social-media services, not the sites themselves)
  • how – because the content is both individually and socially produced by the users of these media companies – privacy, safety and security are also social (maintained and shared by users as much as the services) as well as personal
  • how fluid social-media use is (users can move on, set up accounts and post elsewhere if they don’t like restrictions placed on them, so regulation has an impact on companies, not so much content or users)
  • how safety and privacy in social media are collective as well as individual and are maintained in a distributed way by all parties concerned (users for themselves, peers for each other, their online and offline communities, companies and governments).

Self, peer & community protection

What all of that says to me – and I think to anybody who’s been observing this experiment for a decade and a half – is that, where we’re talking about protecting people rather than free speech, truly effective protection can no longer come only from giant, impersonal entities such as governments and global companies. Protection of children in particular (because they’re participants and producers, constantly changing and in a highly experimental phase of their lives) works from the inside out. It’s best developed with the people closest to them, from parents to other family members to people who care about them at school and on out through the concentric circles, and it’s a learning process. It must involve children themselves, because internal factors such as resilience, mindfulness or critical thinking, and a moral compass or inner guidance system are critical.

And so the experiment continues, hopefully with our children’s participation, as we all find ourselves revisiting what democracy and free speech mean, now, in lives that happen in global, social digital spaces too.

Disclosure: As co-director of, I’m a member of Facebook’s Safety Advisory Board.