
by Larry Magid

This post first appeared in the Mercury News

I’m about to report on some interesting numbers from Facebook regarding inappropriate material it deleted from its service, but first I’d like to warn you about the types of material you may be providing not just to Facebook but to third-party developers.

Beware of apps bearing ‘insight’

You have undoubtedly heard about the scandal where data from tens of millions of Facebook users were collected by a researcher via a personality quiz called “thisisyourdigitallife” and turned over to Cambridge Analytica for the benefit of the Trump campaign. And now, according to New Scientist, another personality quiz app called myPersonality exposed personal data of 3 million Facebook users, including “the results of psychological tests.”

I’ll give you the same advice that I give my friends and family. Don’t take these tests and quizzes, regardless of whether you find them on Facebook, in the Apple or Android app store or on the web. It may be fun to find out what type of animal you resemble or who among your friends is a true soulmate, but there is a reason that developers of these apps have invested in something they’re letting people use for free. Facebook has suspended both of these apps from its platform and has tightened its practices regarding data collection, but I’m still wary of these types of apps.

That isn’t to say there aren’t legitimate uses for personality profiles. Dating sites such as eHarmony ask users to fill out profiles, but it’s clear why they are doing so. Still, I recommend checking the privacy policies and reputation of anyone who is asking you to provide them with this type of information.

“Misbehaving apps” wasn’t one of the categories included in the Community Standards Enforcement Preliminary Report that Facebook released Tuesday. Instead, the company revealed numbers on its takedowns of graphic violence, adult nudity and sexual activity, terrorist propaganda (ISIS, al-Qaeda and affiliates), hate speech, spam and fake accounts.

Bad content by the numbers

First, the numbers.

  • Facebook took action on 583 million fake accounts in the first quarter (Q1) of 2018, down from 649 million in the fourth quarter (Q4) of 2017.
  • Action on graphic violence increased to 3.4 million pieces of content in Q1 2018 from 1.2 million in Q4 2017.
  • The company took action on 21 million pieces of “adult nudity and sexual activity” content (posts, images or videos) in each of those quarters.
  • It dealt with 1.9 million items of terrorist propaganda in Q1 2018, up from 1.1 million in Q4 2017.
  • Action on hate speech content rose to 2.5 million items in Q1 2018 from 1.6 million in Q4 2017.
  • Facebook dealt with 836 million pieces of spam in Q1 2018, up from 727 million in the final quarter of 2017.

With the exception of fake accounts and sexual material, all the numbers went up, but that doesn’t necessarily mean that the problems were getting worse. In its report, Facebook said that much of the increase was due to better detection methods. In the graphic violence section, for example, the company said it used photo-matching “to cover with warnings photos that matched ones we previously marked as disturbing. These actions were responsible for around 70% of the increase in Q1.”

AI to the rescue

What I found most interesting was Facebook’s reporting on how many items it finds before users report them. Thanks to artificial intelligence and other software enhancements, Facebook is now able to detect a great deal of this content automatically. For example, the company said that 99.5 percent of the terrorist propaganda acted on in Q1 this year was “flagged by Facebook before users reported it.” That was also true for 85.6 percent of graphic violence, 95.8 percent of nudity and sexual content, 99.7 percent of spam and 98.5 percent of fake accounts.

The only exception was hate speech, where only 38 percent of the content acted on in Q1 of this year was flagged by Facebook before a user reported it. The company said that hate speech content often requires “detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards.”

But please don’t use this as an excuse to not report bad content or behavior. Just as with hate speech, there are categories of bad content that are difficult for AI to detect, like many instances of cyberbullying and harassment. So please do use the reporting mechanisms on Facebook and the other platforms you use.

More to be done

This level of transparency is a good start, but there is still much for Facebook to clean up on its platform. Fake news remains a problem and is likely to get worse during upcoming elections. Unfortunately, it’s not easy for AI software to know the difference between what’s real and fake, so the service has to figure out ways to enhance its software with human moderation.

How people treat each other on the service is also hard to control via software. It’s probably not a good idea for algorithms to police human behavior, and it’s not even something human moderators can do well, especially with adults who don’t know where to draw the line between spirited debate and demeaning comments.

Facebook also needs to make good on its promise to increase the number of well-trained human moderators who can make the right judgment calls when deciding whether to remove content or suspend users. Even knowing the difference between a legal name and a stage or professional name can be challenging for both humans and software. Life is nuanced, and Facebook’s arbiters of proper behavior have the difficult job of honoring those nuances while prohibiting harm.

Disclosure: Larry Magid is CEO of ConnectSafely.org, a nonprofit internet safety organization that receives financial support from Facebook and other tech companies.