A US law about children’s online services can really only regulate US-based children’s online services. It might influence foreign regulators, but it has no jurisdiction over sites and services based outside the US, and it can’t stop US users from leaving compliant services for noncompliant ones outside the US (or inside it, for that matter). So, given the global nature of the Internet, there’s no guarantee that a law like COPPA (the Children’s Online Privacy Protection Act) can keep kids safe, because it can’t keep them on the sites over which it has jurisdiction. See the logic of this? There is this danger, greater than ever in this user-driven social medium, of a law creating a false sense of security – whether in lawmakers or parents.
That’s why we ConnectSafely folk just filed a comment with the FTC concerning its latest proposed COPPA revisions. How do you regulate a medium that is increasingly social or behavioral, global, and updated by users themselves in real time, 24/7? At the very least, regulators need to strike a very difficult balance between government regulation and self-regulation (through education). The more government tries to regulate, requiring protections that by definition restrict children’s free expression, the less likely it is that children will stick around to “enjoy” that safety. If they do go underground to sites, apps, games or other services that aren’t compliant – whether in the US or outside of it – they are, purely logically, less protected. US policymakers don’t really want to require that level of safety, do they? I don’t think they can afford to assume that enough young customers will stick around either to attract the restricted advertising allowed on kids’ sites or to cover the additional cost of doing business. There’s already an example of how the latter didn’t happen for the now-defunct, ultra-safe LEGO Universe (see this). As for parents, we’ve already seen that, when they aren’t seeing the benefits of such regulation, they encourage their kids to find workarounds.
The most reliable, enduring safeguard
So can you see how – in a global, user-driven, social medium – user self-regulation is becoming the most effective protection? Where kids are concerned, it’s developing the filter between their ears, as my ConnectSafely co-director Larry Magid termed it back in the 1990s. Riffing on that a bit, I can tell you for sure that this is the safeguard that…
* Arrives with them pre-installed, free of charge
* Is with them wherever they go
* Improves with use
* Runs continuously
* Lasts a lifetime
* Supports and enhances all applications
It just needs plenty of practice at home and school, in all kinds of situations with all kinds of content and collaborators, in digital environments and physical spaces. It operates best with conscious (mindful) application – the awareness of how it’s both protective and helpful, how it makes all interaction (with content and people) go better, and how it improves performance, both academic and social. Ideally, it includes social-emotional learning at school as well as at home, the kind that embraces social interaction in any kind of space. This education in self-regulation empowers by turning users into stakeholders in their own and each other’s well-being online and offline (for more on this, see “What does ‘safe’ really look like in a digital age?”).
Related links
* My co-director Larry Magid’s post about the comment
* My post last August about “The minimum age & other unintended consequences of COPPA”
* Just a few of many past posts about the FTC’s fine investigative work in children’s privacy protection: on apps for cellphones, concerning virtual worlds, and perspective from virtual world moderators