Online fake news and “urban legends” have been around for decades, but they became a major issue in the wake of the 2016 presidential election. During this period, millions of people were exposed to fake stories about the candidates, often posted on sites that were financially motivated to get people to click on ads around sensational but false content. Although there were some fake stories critical of Donald Trump, the preponderance of stories took aim at Hillary Clinton, according to many published reports.
Although it was never proven, some have alleged that fake news on Facebook affected the outcome. But regardless of whether that’s true, the presence of fake stories did get the attention of Facebook executives who, on Thursday, came up with a partial solution.
Facebook won’t ban fake news, but it will encourage users to report stories that they believe to be fake.
Those reports will be analyzed with the help of third-party fact checking organizations that are signatories of Poynter’s International Fact Checking Code of Principles. In a blog post, Facebook said, “if the fact checking organizations identify a story as fake, it will get flagged as disputed and there will be a link to the corresponding article explaining why.” The company also said that stories that have been disputed may also appear lower in News Feed.
The company said that stories that have been flagged can still be shared, but if you do share such a story, you’ll see a notice that the story has been “disputed by 3rd parties.”
“We’ve found that if reading an article makes people significantly less likely to share it, that may be a sign that a story has misled people in some way. We’re going to test incorporating this signal into ranking, specifically for articles that are outliers, where people who read the article are significantly less likely to share it,” said Facebook Vice President of News Feed Adam Mosseri.
This move won’t completely eliminate fake news, but it will serve as a warning for those inclined to share such stories.
Personally, I think this is a good approach because it doesn’t suppress the stories but does put people on notice that the news has been found to be fake. I also like that Facebook is working with reputable third-party fact checkers rather than attempting to do this on its own. Some Facebook engineers have reportedly been working on algorithms to identify fake news, but algorithms can be wrong, and any technology that attempts to filter or remove content is subject to errors.
What Facebook shouldn’t and doesn’t want to do is to be in the position of deciding what’s legitimate or what isn’t, especially when politics and ideologies are involved. The company agrees. “We believe in giving people a voice and that we cannot become arbiters of truth ourselves,” said Mosseri.
Not just politics
Although political fake news has dominated the discussion lately, this is not a new phenomenon, nor is it limited to political stories. Decades ago there was a persistent story that the post office was considering a 5-cent tax on email, and that story is still circulating. As I’ve been saying for years, you are responsible for what you post and share.
It’s not usually difficult to verify or debunk a rumor. In most cases, all you have to do is highlight part of the text in a browser, right-click, and select the search option to have a search engine look for any web references to the text. If it’s a known hoax, you’ll find something on Snopes.com or another rumor-tracking site proving that it’s false.
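If you’d rather do this from a script than from the browser’s right-click menu, a minimal sketch using only Python’s standard library can build the same kind of search URL. The claim text and the choice of search engine here are just illustrative assumptions:

```python
from urllib.parse import quote_plus

# A suspicious phrase copied from a post (hypothetical example).
claim = "5 cent tax on email"

# Search for the exact phrase (in quotes) plus the word "hoax",
# which tends to surface debunking pages like Snopes entries.
query = f'"{claim}" hoax'
url = "https://www.google.com/search?q=" + quote_plus(query)

print(url)
# → https://www.google.com/search?q=%225+cent+tax+on+email%22+hoax
```

Opening that URL in a browser does the same thing as the highlight-and-search trick, just in a form you could reuse for a batch of claims.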