by Larry Magid
This post first appeared in the Mercury News
One of my favorite phrases is “it’s complicated,” because it expresses the fact that many things in life are not as simple as they seem.
That’s especially true when you need to make an important decision. Should you buy or lease your next car? There are advantages and disadvantages to both. Should a doctor prescribe a particular medicine? The benefits have to be weighed against the risks and possible side effects. The list of nuanced decisions is long, ranging from what you should have for dinner to whether to have an operation that could cure you or kill you.
In addition to nuances, there are what I call competing rights. Policymakers and courts constantly have to make decisions, often despite disagreements among themselves and their constituencies. Most decisions are binary: yes or no, candidate one vs. candidate two, guilty or not guilty. And though there may be pros and cons that decision makers should consider, there are often advocates on both sides who are convinced that their position is the only one that makes sense. That’s one of the reasons we have a Supreme Court to, rightly or wrongly, decide issues that often deeply divide us as a nation.
Nuances and competing rights in social media
Content moderation on social media platforms is one area where nuances and competing rights play a big role. Almost all social media companies say they want to encourage free speech, yet most have community standards or terms of service that limit speech they deem harmful. There are people pressuring Facebook and Twitter to take down all sorts of speech they consider demeaning, defamatory, vulgar, dangerous, misleading or hateful. But when these companies delete such posts or suspend those who repeatedly post them, they are accused of censorship.
Years ago, social media companies struggled with the question of whether to allow beheading videos or videos of animal abuse. At first glance, the obvious answer was no. But some human rights and animal rights advocates argued that the videos should remain, to promote public outrage over horrific crimes. In most cases, the eventual decision was to remove them, but there was legitimate debate among well-meaning people with differing points of view.
There’s been a lot of debate over the suspension of Donald Trump after the January 6th insurrection. Facebook suspended Trump for at least two years, and Twitter suspended him permanently, for posting statements that, they said, encouraged violence and failed to help facilitate a lawful and orderly transition of power. In the opinion of these companies, it was a matter of public safety. After Facebook’s independent Oversight Board ordered the company to reconsider the open-ended suspension, Facebook said it would “extend the restriction for a set period of time and continue to re-evaluate until that risk (of violence) has receded,” in the words of Nick Clegg, Facebook’s vice president of global affairs.
Trump is one of several people on the right who have been removed from these platforms for allegedly dangerous speech, and many conservatives claim that this reflects the political bias of liberal Bay Area tech companies. Florida passed a law “to stop censorship of Floridians by Big Tech” because, as Lieutenant Governor Jeanette Nuñez claimed, “What we’ve been seeing across the U.S. is an effort to silence, intimidate, and wipe out dissenting voices by the leftist media and big corporations.”
The companies say that they are not discriminating based on ideology but only removing content or suspending people for violating published standards designed to keep their platforms free of hate speech, harassment, dangerous misinformation, and incitement of violence. And they can point to millions of conservatives — including members of Donald Trump’s immediate family — who continue to use these platforms to, among other things, complain about censorship. If Facebook and Twitter are trying to censor conservative thought, they’re not doing a very good job.
Getting it from both sides
While almost no one advocates censoring purely ideological speech, many point out that these platforms remain a cesspool of misinformation, where election deniers, conspiracy theorists, anti-vaxxers, anti-maskers and others spew dangerous content that threatens both our democracy and public health. Yes, Facebook and Twitter are getting it from both sides: some claim they censor too much, and others say they don’t censor enough.
Even the issue of mental health is nuanced. Whistleblower Frances Haugen correctly cited Facebook’s own research that acknowledged, “We make body image issues worse for one in three teen girls,” and “Teens blame Instagram for increases in the rate of anxiety and depression.” But as NPR put it, “Facebook’s own data is not as conclusive as you think about teens and mental health.” That leaked data consisted of teens’ self-assessments, not objective measurements. While opinions are important, they don’t necessarily tell the whole story. A lot of people have opinions about the crime rate, for example, that often conflict with actual crime statistics. NPR pointed to a study conducted in 2015 and published in 2020, whose co-author, Candice Odgers, said, “If you ask teens if they are addicted/harmed by social media or their phones, the vast majority say yes.” But, she added, “if you actually do the research and connect their use to objective measures … there is very little to no connection.”
The nonprofit group Common Sense Media has been very critical of Facebook’s impact on youth over the years, yet in 2021 it published a study, led by respected researcher Victoria Rideout, that found that “43% of 14- to 22-year-old social media users say that when they feel depressed, stressed, or anxious, using social media usually makes them feel better, compared to just 17% who say it makes them feel worse. The rest say it makes no difference either way.” Unlike the internal product study of 150 teens cited by Haugen, Rideout’s study surveyed 1,513 teens with a margin of error of 3.64%. It’s impossible to know the margin of error of that small internal study, though frankly, you don’t need a study to know that some teens will be negatively affected by some content. I was a chubby kid who sometimes had self-esteem issues when I saw better-looking boys at swim and beach parties, so it’s no surprise that some teens with body image issues will feel worse when they are bombarded by images of attractive peers who, as we wrote in the newly revised ConnectSafely Quick-Guide to Instagram, “spend a lot of effort making themselves look really good via makeup, lighting, wardrobe, and in some cases, even plastic surgery.”
Yes, we do need to take Frances Haugen’s criticisms of Facebook seriously, and the company does need to redouble its safety efforts. But we also need to put these issues in context and consider the bigger picture, including the risk of unintended consequences, before making any policy decisions about social media, which, like those pills in your medicine cabinet, has benefits, risks and side effects. As I said, it’s complicated.
Disclosure: Larry Magid sits on Facebook’s Safety Advisory Board and is CEO of ConnectSafely.org, a nonprofit internet safety organization that receives financial support from Facebook and other companies.