by Larry Magid

At Facebook’s 2019 F8 developer conference, CEO Mark Zuckerberg said that “groups are now at the heart of the experience, just as much as your friends and family are.” He was talking about an initiative to get Facebook users to join semi-private groups where they can connect with people who share their interests, their passions, or a common bond, such as the type of car they drive, the team they support, or simply family ties or a shared workplace. Of course, groups can also be organized around issues, ideologies, and political parties, and there are plenty of those.

I belong to a few groups — mostly around my hobbies and professional interests — and most of the conversations are cordial and helpful. One group, dedicated to the car I drive, has provided me with numerous tips. One day, after a repair, I found a strange foam-plastic part in my backseat. I sent a picture of it to my local Tesla repair center, but they couldn’t identify it. I posted it on the Tesla Model 3 Owners Club page, and other members told me it was a high-voltage battery cover from under the passenger seat that the technician must have forgotten to reattach. Armed with that information, I got Tesla to put it back where it belonged.

But even in this mostly congenial group of Tesla owners, some people can get nasty, contentious or rude, sometimes making fun of what they consider to be a stupid question or getting into fights over the relative merits of different car brands, like the arguments I’ve observed for decades between Windows and Macintosh enthusiasts.

But these types of debates are mild compared with what you might see in political groups. I spent some time in one Facebook group where I found numerous contentious and rude comments, along with several posts containing misinformation, including the demonstrably false and dangerous claim that “Data shows more people have died because of the Covid Vaccines in 6 months than people who have died of Covid-19 in 15 months.”

All posts on Facebook, including those within groups, are subject to review and removal by Facebook’s paid moderators, but the sheer volume of posts makes it impossible to police everything on the service. Besides, some posts in groups may not violate Facebook’s rules but are nonetheless offensive or off-topic for a particular group’s purpose. For that and other reasons, Facebook allows group administrators — typically the people who created the group — to do their own moderation and to control who gets to join.

Group administrators, for example, have long been able to set rules for their groups that go beyond Facebook’s general content rules and even ask prospective members questions to determine whether they’re a good fit for the topic. They can also opt to require pre-approval of posts, or to delete posts that don’t meet their criteria or violate their rules. But, despite these tools, there can be conflict and inappropriate content, especially in large groups where the administrator may not be able to keep up with all the posts.

New tools

On Wednesday, Facebook announced “new tools to help moderate conversations and potential conflict within a group.” These include the ability to automatically restrict people who don’t qualify to participate based on several criteria, such as how long they’ve had a Facebook account or whether they have recently violated group rules. The company is also testing what it calls conflict alerts, which notify administrators “when there may be contentious or unhealthy conversations taking place in their group so they can take action as needed.” Administrators can proactively prevent unwanted comments from showing up, rather than deleting them after the fact, and can reduce “promotional content” by declining posts and comments with specific links.

Facebook is also empowering administrators “to slow down conversations” by limiting how often specific group members can comment and how often comments can be made on certain posts.

Will help but not eliminate toxic content

While empowering group administrators to better police their groups will help, this move won’t eliminate toxic posts, conflict, and misinformation. Unlike Facebook’s paid content moderators, administrators are volunteers who create and manage groups, typically because they are passionate about the subject. Some people have created groups specifically to share misinformation or to spread divisive, derogatory, or even dangerous content. On November 5, just two days after the 2020 presidential election, Facebook removed a “Stop the Steal” group because of “worrying calls for violence from some members of the group.” The group, which was based on the false premise that Biden stole the election, acquired 360,000 members in its first 24 hours. And two months later, there was violence at the U.S. Capitol by people who wanted to overturn the results of that election.

Legislative pushback

Empowering group administrators is only one of many things Facebook and other tech companies are doing to reduce the spread of false information and the risk of violence, and to enforce their other rules of engagement. After the January 6 insurrection, then-President Donald Trump was removed from Facebook because the company felt his posts were encouraging violence, a clear violation of its longstanding rules. But these companies are up against a lot of opposition, including proposed laws in numerous states that would limit their ability to enforce their rules. In May, Florida Gov. Ron DeSantis signed a law that would make it illegal for social media companies to ban a candidate for state office for more than 14 days, even if the candidate violates the company’s rules. A legal challenge filed by NetChoice and the Computer & Communications Industry Association asserts that the law “discriminates against and infringes the First Amendment rights of these targeted companies, which include plaintiffs’ members, by compelling them to host — and punishing them for taking virtually any action to remove or make less prominent — even highly objectionable or illegal content, no matter how much that content may conflict with their terms or policies.”

While I agree that government oversight makes sense when it comes to platforms as powerful as Facebook, I think it’s dangerous for government to prevent the company from enforcing rules designed to protect its users and society as a whole. The government rightly requires carmakers to install seatbelts and airbags, but it doesn’t prevent them from taking other safety measures. Local and state governments regulate bars, but I’ve never heard of a bartender being punished for refusing to serve someone who is clearly intoxicated or for evicting a customer who threatens or harms others in the bar.

Disclosure: Larry Magid is CEO of ConnectSafely.org, a non-profit internet safety organization that receives financial contributions from Facebook and other tech companies.
