Its creator, 22-year-old Adam Hildreth, calls it the Anti-Grooming Engine, The Guardian reports. He claims the product is 99.9% effective at identifying adults online with a sexual motivation, and it's not keyword filtering. "The software is designed to look out for conversation patterns, typing speed, use of grammar and punctuation, and any aggressive or bullying language. Using extracts of online conversations between young people as examples of 'good' data, it is fed into the computer and compared with conversation gathered from that of suspected groomers." The computer, he says, "learns" to tell the difference. CyberSentinel in the US has made similar claims in the past, indicating that others have thought of this approach (see this from 2001). The proof is in the pudding, though, The Guardian quotes one child-safety advocate as saying, and the pudding's not done yet – check out the article for the full picture. Here's info on this site about "How to recognize grooming".
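For readers curious about the general technique being described: training a classifier on labeled examples of "good" and suspect conversations is a standard supervised-learning approach. The real engine's features (typing speed, grammar, punctuation) and model are not public, so the toy naive Bayes word-count classifier below, with invented example snippets, is only an illustrative sketch of the idea, not the actual product.

```python
# Toy sketch of supervised text classification, in the spirit of the
# approach the article describes. All training snippets are invented
# for illustration; a real system would use far richer features.
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label). Returns per-label word counts and doc totals."""
    counts = {}          # label -> Counter of words seen under that label
    totals = Counter()   # label -> number of training documents
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the highest log-probability (add-one smoothing)."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)   # class prior
        size = sum(c.values())
        for w in text.lower().split():
            score += math.log((c[w] + 1) / (size + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical training data (invented examples)
data = [
    ("lol that game was so fun wanna play again", "good"),
    ("did u see the new episode it was awesome", "good"),
    ("you seem so mature dont tell your parents we talk", "suspect"),
    ("this is our secret send me a picture", "suspect"),
]
counts, totals = train(data)
print(classify("keep this secret dont tell anyone", counts, totals))  # prints: suspect
```

The "learning" the article mentions is just this: the model accumulates statistics from labeled examples, then scores new conversations against both profiles and reports whichever fits better.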
NetFamilyNews – by Anne Collier