This post first appeared in the San Jose Mercury News

By Larry Magid

When I started to write about technology, I had no idea I would someday write a column about both the First and Second Amendments. But, given the way the internet has woven itself into our lives, it’s almost impossible to cover technology without thinking about how it affects our rights.

The First Amendment has been part of the cyber-vocabulary since at least 1996, when Congress passed the Communications Decency Act, which would have effectively banned internet porn and other content that some authorities might have deemed to be “patently offensive as measured by contemporary community standards.” The law was eventually struck down by the Supreme Court on free speech grounds.
But the First Amendment has been invoked in numerous other legal cases, including challenges to subsequent child protection laws, several of which were also struck down by courts because of their potential chilling effect on adult speech. Last year, in Elonis v. United States, the Supreme Court overturned the conviction of a man who posted on Facebook about killing his wife, accepting his argument that even though the words might have seemed threatening, they were artistic expression rather than a direct threat.

Still, there are plenty of cases where speech can and has been limited. The 1919 Supreme Court ruling that the First Amendment doesn’t protect the right to shout “fire” in a crowded theater still stands, as does the prohibition of production, distribution and possession of child abuse images or so-called “child pornography.” Yet, in a 6 to 3 ruling in 2002, the court found that “virtual child pornography” is protected speech as long as no actual children are being abused in the production of the material.

I’m no legal scholar, but it strikes me that what the court has been doing since 1919 is distinguishing between what many might consider offensive and what constitutes a clear and present danger.
Online radicalization
Today, the free speech battleground is focused largely on terrorist threats and online radicalization. In 2010, in Holder v. Humanitarian Law Project, the Supreme Court upheld a ban on knowingly providing material support to organizations engaged in terrorism, but the court was careful to distinguish between direct material support and general advocacy. As reprehensible as groups like ISIS may be, it’s not illegal to support their goals and objectives, but it is illegal to support their murderous tactics.
But just because something may be legal doesn’t mean that it belongs on social media.
Companies like Facebook and Twitter have their own terms of service that prohibit many forms of speech, including pornography, hate, harassment and support of terrorism or terrorist organizations. Even if that speech is legal, it can nevertheless be banned on these privately owned services that are allowed to limit what can be posted.
But it’s worth remembering that, with its 1.65 billion active users, Facebook’s “population” is bigger than that of any country. That justifies the pressure to be thoughtful and fair about such rules, which is why Facebook, Twitter, Snapchat and many other companies consult with legal scholars, safety experts and advocacy groups (myself among them) in their ongoing attempts to get it right.
They sometimes err by being either too restrictive or too permissive, but they do try to strike a balance, encouraging free expression while trying to prevent speech that’s harassing or threatening or that encourages or celebrates violence.
Tech and guns
During this presidential campaign, and in the wake of too many tragic shootings, there is renewed discussion about Second Amendment rights and gun control. As with speech and the First Amendment, there are those who argue that any restrictions on our right to bear arms are unconstitutional. Yet just as most free speech advocates agree that there can be some limits on harmful speech, polls show that most gun owners can live with reasonable restrictions on the types of weapons available to private citizens, as well as rules that keep weapons away from some who might abuse them.
As with speech rights, technology has a role to play. As President Bill Clinton did in the ’90s, President Obama has recently called on gun makers and technology companies to collaborate on “smart guns” designed so that only the authorized user or owner of a gun could fire it. There are already examples of biometric devices, like the Safe Gun Technology (SGTi) fingerprint reader, which can be added to existing guns, along with guns that can be fired only if the shooter is wearing a radio-transmitting ring, wristband or, as with the German-made Armatix iP1 pistol, an RFID watch.
These technologies wouldn’t end all gun tragedies, but they would help prevent children from firing a gun they find in a home, or a criminal from using a police officer’s gun to attack the officer or shoot others, as has occurred several times in the Bay Area alone.
Unfortunately, when the major gun makers Colt and Smith & Wesson agreed with the Clinton administration in the late ’90s to make such guns, they met with opposition from the NRA, which worried (correctly, in the case of a New Jersey law) that smart guns would open the door to the government banning weapons without such technology. The author of that New Jersey law has since proposed legislation to repeal it and replace it with a less restrictive measure that would encourage smart guns while still allowing the sale of conventional guns.
Another nexus between tech and the Second Amendment is the set of proposals that would prevent online gun sales by private parties. A 2011 New York City undercover investigation found that nearly two-thirds of online gun sellers were willing to sell a gun to someone who admitted that he probably couldn’t pass a background check. I’m the last on