Can the social Web be policed?

The question is increasingly coming up around the world, as if this new user-driven medium is calling on all of us to try to think past profile deletions to real solutions.

By Anne Collier

In “Cyber-bullying cases put heat on Google, Facebook,” Reuters points to increasing signs around the world that people want to hold social-media companies responsible for their users’ behavior. “The Internet was built on freedom of expression. Society wants someone held accountable when that freedom is abused. And major Internet companies like Google and Facebook are finding themselves caught between those ideals,” it reports. Back before social networking, when harassment and fights happened merely over the phone, no one held phone companies accountable for settling the disputes. In the US, the Communications Decency Act extended that “safe haven” to Internet service providers, and courts have included social-media companies in that category ever since.

Here’s the view from Australia, where the Sydney Morning Herald reports that cruel defacement of tribute pages on Facebook has gotten Prime Minister Kevin Rudd to consider “appointing an online ombudsman to deal with social networking issues.” [Maybe that’s where we’re headed: countries having ombudsmen able to decide whether complaints in their countries should be “escalated” to their specially appointed contacts at social sites at home and abroad? But what about sleazy social-media operations that fly under the radar or refuse to deal?]

Certainly it’s understandable that people expect more from social network sites than they do from phone companies, because bullying there is more public and harder to take back, but is the expectation logical? That’s an honest question, not a rhetorical one (please comment in our forum), because what does not seem to be different in this new media environment is how arguments and bad behavior get resolved: by the people involved. It may take time to work through complaints sent from among tens and in some cases hundreds of millions of users, but fake defaming profiles and hate groups do get deleted by reputable social network sites like MySpace and Facebook. Deleting the visible representation of bullying behavior, however, doesn’t change much; it’s at best a temporary fix. Bullies can put up new fake profiles as quickly as – often more quickly than – the original ones can be taken down.

Of course we should expect companies to be responsible and take such action, but can we reasonably blame them if doing so has no effect on the underlying behavior? What court cases like the one in Italy against Google executives – over an awful bullying video on YouTube that the court felt wasn’t taken down fast enough (see the Reuters article above) – illustrate is humanity’s struggle to wrap its collective brain around a new, truly global, user-driven medium where the “content” is not just social but behavioral, and the full spectrum of human behavior at that.

I know of no real solution to social cruelty on the social Web as yet – if you do, please comment – except a concerted effort on the part of the portion of humanity that cares to adapt to this strange, sometimes scary new media environment by adjusting our thinking and behavior. That includes teaching children from the earliest age, at home and school, social literacy as well as tech and media literacy (social literacy involves citizenship, civility, ethics, and critical thinking about what they upload as much as what they download) – and modeling those literacies for our children. Can it be that universal, multi-generational behavior modification is not just an ideal but the only logical goal? What am I missing here?

Leave a comment