A couple of decades ago — before former Mayor Rudolph Giuliani’s anti-crime initiative helped reduce lawlessness — I stayed with a friend in New York but was annoyed at how much work I had to do to get into his apartment. I had to unlock a bolt lock, a regular door lock and a “police lock,” and disable an alarm. He installed all this security, of course, to keep intruders out, but it also made it really hard for his invited guest to enter.
I sometimes feel the same way when I try to get into a website where I’m an invited guest. In an effort to keep “bots” (automated computer systems) from sending spam or engaging in other abusive practices, many sites require the user to complete a test of some sort to prove that they are really a human being. Often this is a “Completely Automated Public Turing test to tell Computers and Humans Apart,” or CAPTCHA, which presents you with weirdly displayed and distorted letters and numbers that are not only difficult for computers to recognize automatically, but hard on humans too.
I often have trouble deciphering the exact characters to type in a CAPTCHA, and it’s not uncommon for me to give up after several tries. Sometimes there’s an alternative method, such as listening to an audio recording of the characters, but that requires you to turn up the speakers or use headphones, which isn’t always practical. And even the audio is often deliberately garbled to thwart speech-recognition software, which can make it hard for humans to decipher too.
And, ironically, even though these CAPTCHAs sometimes thwart humans like me, they can’t always stop computer systems. Google discovered that there are algorithms that can “decipher the hardest distorted text puzzles” with better than 99 percent accuracy.
Increasingly we’re seeing optional “two-factor authentication,” in which the site sends a unique code to your smartphone or other device that you’re required to type in before you can enter. Two-factor authentication is more secure, but it won’t work if your cellphone has a dead battery or you don’t have it with you.
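The code-texting flow described above can be sketched in a few lines. This is a minimal illustration, not any particular site’s implementation; the helper names (`issue_code`, `verify_code`) are hypothetical, and a real system would also store the code hashed, expire it after a few minutes, and rate-limit attempts:

```python
import hmac
import secrets

def issue_code():
    """Generate the short one-time code the site would text to your phone."""
    return f"{secrets.randbelow(10**6):06d}"  # six random digits, e.g. "042913"

def verify_code(stored_code, entered_code):
    """Compare what the user typed against the code that was sent.

    hmac.compare_digest runs in constant time, so an attacker can't
    learn the code character by character from response timing.
    """
    return hmac.compare_digest(stored_code, entered_code)
```

The second factor helps precisely because the code is random and short-lived: a stolen password alone is no longer enough to get in.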
We need better security. Usernames and passwords are no longer sufficient, and I certainly understand why sites would use CAPTCHAs to cut down on machine-generated spam, but we need to find systems that are hard for bad guys to break but easy for good folks to use.
Google is one of the companies that has used those frustrating CAPTCHAs, but it’s now switching to a more sophisticated yet easier-to-use solution called the “No CAPTCHA reCAPTCHA.”
From a user perspective, the process couldn’t be easier. You just check a box that says “I’m not a robot.”
It doesn’t just take your word for it. Using what Google calls “an Advanced Risk Analysis back end,” the new system “actively considers a user’s entire engagement with the CAPTCHA” by evaluating a broad range of cues that distinguish humans from bots, according to the Mountain View Internet company.
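Behind that checkbox, the risk analysis happens on Google’s servers: the browser sends the site a token, and the site’s server asks Google’s “siteverify” endpoint whether the interaction looked human. Here’s a rough sketch of that server-side check in Python; the function names are my own, and error handling is omitted for brevity:

```python
import json
import urllib.parse
import urllib.request

# Google's server-side verification endpoint for reCAPTCHA
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_verify_request(secret_key, recaptcha_token):
    """Form-encode the parameters the siteverify endpoint expects:
    the site's secret key and the token the browser posted."""
    return urllib.parse.urlencode(
        {"secret": secret_key, "response": recaptcha_token}
    ).encode()

def passed(siteverify_json):
    """True if Google's JSON verdict says the interaction was human."""
    return json.loads(siteverify_json).get("success", False)

def is_human(secret_key, recaptcha_token):
    """POST the browser's token to Google and return its verdict."""
    body = build_verify_request(secret_key, recaptcha_token)
    with urllib.request.urlopen(VERIFY_URL, data=body) as resp:
        return passed(resp.read())
```

The point is that the website never sees the behavioral cues itself; it only learns Google’s yes-or-no judgment.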
It’s not perfect, so Google sometimes requires a user to solve a puzzle before allowing entrance. Google said that it worked 60 percent of the time in tests conducted by WordPress and more than 80 percent of the time on Humble Bundle; the system is also being tested on Snapchat.
As with a lot of clever things that Google does, there is a bit of a creepy factor: These algorithms work by examining and recording our hand or mouse movements as we try to enter sites. I’m not sure how Google could exploit that information, but I can understand how — along with everything else Google knows about us — it could worry privacy advocates. However, any system that makes it easier for sites to protect themselves from spam should also be good news to the privacy community, so this is probably a trade-off in which privacy benefits outweigh creepiness.
Google admits that it has more work to do to improve its CAPTCHA technology and indeed, so do other companies that are trying to find ways to improve security without making life harder for legitimate users. There are trade-offs not completely unlike those that Mayor Giuliani’s supporters and critics pointed out as he was cleaning up Times Square — one person’s secure environment is another person’s police state. As long as we have to keep bad guys out, the rest of us will have to put up with some inconvenience and worry about being scrutinized.
Disclosure: Larry Magid is co-director of ConnectSafely.org, a nonprofit Internet safety organization that receives financial support from Google.
This post first appeared in the San Jose Mercury News.