
by Larry Magid
This post first appeared in the Mercury News

In 1984, James Cameron directed a science fiction movie in which Arnold Schwarzenegger starred as The Terminator, a robot programmed to kill humans. It's a common theme in movies: robots driven by artificial-intelligence algorithms become monsters that unleash horrors their human programmers never fully intended.

For now at least, such stories remain the realm of science fiction, but there are real algorithms that create real harm. And, like fictitious robots run amok, some of these algorithms have both intended and unintended consequences.

If you spend any time online, you've seen some of the results of these algorithms. For example, if you shop for shoes, you are likely to keep seeing shoe ads, because computers know what you looked at and use various techniques to make sure you subsequently see ads for similar products. While this tracking affects our sense of privacy and can be annoying or even creepy, this type of targeting is relatively benign. The world won't come to an end if we see one too many ads for shoes or handbags.

But algorithms help sell us not just products, but ideas that affect our behavior, our voting and how we interact with other people. At a basic level, they work much like product ads, but instead of just tracking our shopping habits, they track our interests, whom we interact with and even what we post. That's one reason what you see on Facebook and other social media platforms is different from what I see. Just as shoe shoppers are more likely to see shoe ads, people who interact with liberal content are likely to see more liberal content — and likewise for conservatives or people engaging with any other interest.
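To make that mechanism concrete, here is a deliberately oversimplified sketch of interest-based feed ranking. Everything in it, from the post fields to the scoring formula, is a hypothetical illustration; the actual ranking systems used by Facebook and other platforms are proprietary and vastly more complex.

```python
# Purely illustrative sketch of interest-based feed ranking.
# All names and fields here are hypothetical; real platforms use
# far more complex, proprietary models.

def score_post(post, user_interests):
    """Score a post by how well its topics match what the user
    has already clicked, liked and shared."""
    overlap = len(set(post["topics"]) & set(user_interests))
    # Popular posts that match existing interests rise to the top.
    return overlap * post["engagement"]

def rank_feed(posts, user_interests):
    # Highest-scoring posts appear first, so content similar to what
    # the user already engages with crowds out everything else --
    # the "bubble" effect described above.
    return sorted(posts, key=lambda p: score_post(p, user_interests),
                  reverse=True)

posts = [
    {"topics": ["liberal-politics"], "engagement": 90},
    {"topics": ["conservative-politics"], "engagement": 85},
    {"topics": ["shoes"], "engagement": 40},
]
print(rank_feed(posts, user_interests=["liberal-politics", "shoes"]))
```

Even this toy version shows the bubble forming: whatever the user already engages with rises to the top of the feed, and everything else sinks out of view.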

One could argue that this is a good thing because it presents us with information that interests us. But it can also be bad, because it divides us into bubbles or "tribes" that reinforce our biases and play on our emotions. For some, it also feeds a media diet based at least in part on falsehoods that are seldom, if ever, questioned by those who post them or comment on others' posts. It's one of several reasons Americans and others around the world are so divided, polarized and too often misinformed.

The consequences can, literally, be fatal. Many who participated in the deadly insurrection at the Capitol on Jan. 6 were influenced by their social media feeds. It's among the reasons otherwise good citizens wind up buying into conspiracy theories that were once only the province of a radicalized fringe. Today, there are too many people on the "fringe" for the term to even apply. If everything you see reinforces a notion — however insane it might be — that notion becomes your reality.

The consequences of medical misinformation have cost thousands of lives. Many of those who have refused to be vaccinated based their decisions on inaccurate social media posts about the so-called dangers of vaccines, which — in reality — pale compared with the danger of not being vaccinated. Some refuse to wear masks in the incorrect belief that they are harmful. Others refuse to follow basic social distancing or other public health recommendations because they still believe COVID-19 to be a hoax. I haven't seen many dystopian killer robot movies, but it's hard to imagine robots killing as many people as have died from COVID-19 because of information fed to them by robotic algorithms.

Social divisions aren't new. When I was a kid, there were liberal and conservative newspapers with opinion columns designed to promote and reinforce political perspectives. But — when it came to the actual news of the day — these papers mostly reported the same facts. Opinion writers might complain about the result of an election, but they did not dispute the outcome itself.

No shortage of accurate information

There is no shortage of accurate information on Facebook. The social media giant partners with numerous legitimate publishing organizations that strive to report facts. My own feed includes links to many articles from legitimate and mostly accurate sources, including government sites staffed by respected scientists. But other people's feeds are different, and there is nothing to stop someone from posting a link to a spurious source, which can then be amplified by the algorithms that keep us in a filter bubble.

Algorithms have other tasks. Some, especially on sites aimed at children and teens, act as moderators to help enforce rules against foul language, cyberbullying or self-harm. These, too, have benign intentions and are somewhat effective, but people find ways to get around them by coming up with words, memes and spellings that the algorithms don't yet understand.
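A toy example shows why such filters are easy to evade. This is a hypothetical sketch, not any platform's real moderation system; it only illustrates that a filter built on exact word matches is defeated by a single character swap.

```python
# Illustrative sketch of keyword-based moderation and why altered
# spellings slip past it. Hypothetical example only.

BLOCKED = {"badword", "bullyphrase"}  # stand-in blocklist

def is_flagged(message: str) -> bool:
    """Flag a message if any word exactly matches the blocklist."""
    words = message.lower().split()
    return any(word in BLOCKED for word in words)

print(is_flagged("that badword again"))   # True: exact match is caught
print(is_flagged("that b4dword again"))   # False: new spelling evades
```

Real moderation systems are far more sophisticated than this, but the arms race is the same: users invent spellings the system hasn't learned, and the system has to catch up.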

Good intentions with bad outcomes

Facebook has tried to rein in its algorithms, or at least refine them to do less harm and more good. But, like a medicine with harmful side effects or a surgery gone wrong, some of these changes may have made things worse. I was at Facebook's F8 developers conference in 2018 when CEO Mark Zuckerberg announced a change in the algorithms to encourage "meaningful social interactions" between friends and family. He also emphasized groups where people with common interests could interact, unhindered by outsiders. But, as a recent Wall Street Journal article, "Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead," illustrates, this well-intentioned move has had some serious side effects.

While the Journal exposed some Facebook policy decisions apparently motivated by financial gain, it also pointed to decisions designed to tone down the conversation and improve civility that unintentionally did the opposite. "Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism," and, whether intentional or not, "that tactic produced high levels of comments and reactions that translated into success on Facebook."

Algorithms are a big part of the problem, but unlike fictitious killer robots, they remain under the control of humans who must be held accountable for their impact.

Disclosure: Larry Magid is CEO of ConnectSafely.org, a non-profit internet safety organization that receives financial support from Facebook and other companies.

 

