
By Larry Magid
This post first appeared in the San Jose Mercury News.

For the past 12 years, the United Nations has sponsored the Internet Governance Forum (IGF), a “multi-stakeholder” event that brings together people from governments around the world along with tech executives, nonprofits and academics. This year it was held in Geneva, Switzerland.

Although there are some technical sessions, most of the focus is on internet policy — issues that countries are struggling with as the internet becomes increasingly integrated into our lives. As usual, I’m speaking at sessions focused on child safety, but this year there are some newer issues on the table, including the advantages and risks of artificial intelligence, blockchain technology and fake news.

Listen to Larry’s segment about IGF on Washington DC’s WTOP radio

Fake news was the dominant subject. It came up in nearly every session I attended, including some that seemingly had nothing to do with it. As in the United States, people around the world are concerned about its impact on elections and social discourse, and there is plenty of evidence of state-sponsored fake news affecting elections in several countries. There are two distinct types of “fake news.” There is “real” fake news, real in the sense that the term clearly applies because such stories are either totally fabricated or at least largely untrue. And then there is “fake” fake news, when someone uses the term to dismiss news outlets or stories they simply disagree with. Despite high-profile claims to the contrary, when a journalist makes a mistake, it’s not fake news, especially if the story is corrected.

Fake news even came up in a session I attended on blockchain technologies. Blockchain is the technology behind bitcoin and other cryptocurrencies. It’s essentially a ledger or database that can be distributed across multiple devices, and it allows people to exchange information or value without revealing unnecessary details, including who is involved or what is being exchanged. As you’d expect, it is being used for illegal drug and weapons deals and other shady transactions, but there are lots of legitimate users and plenty of well-respected players, including IBM. Even major banks are investigating blockchain as a way to keep confidential ledgers of how they account for funds.
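To make the “ledger” idea concrete, here is a minimal sketch in Python of how blockchain-style records chain together: each entry carries the hash of the one before it, so quietly editing history invalidates everything that follows. This is an illustration only, not the code behind bitcoin or any bank’s system, and it omits what real blockchains add on top, namely copies distributed across many devices and a consensus mechanism for agreeing on the next block.

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's contents deterministically (sorted keys, stable encoding).
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy append-only ledger: each block records the hash of its predecessor."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": 0, "data": "genesis", "prev_hash": ""}
        self.chain = [genesis]

    def append(self, data):
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "data": data,
            "prev_hash": block_hash(self.chain[-1]),
        }
        self.chain.append(block)

    def is_valid(self):
        # Re-derive every link; a tampered block breaks every link after it.
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.append({"from": "A", "to": "B", "amount": 10})
ledger.append({"from": "B", "to": "C", "amount": 4})
print(ledger.is_valid())                    # True
ledger.chain[1]["data"]["amount"] = 9999    # tamper with history
print(ledger.is_valid())                    # False
```

Notice that the ledger records only hashes linking the entries; that is what makes it possible to prove a record hasn’t changed without trusting whoever is holding the copy.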

Blockchain makes it possible to authenticate a transaction, a thing or a person without having to reveal details to other parties. Imagine a driver’s license or passport that carried neither your name nor your picture, yet when you showed it to authorities, they could issue you a traffic ticket or let you into a country without ever learning who you are.

One of the blockchain experts at the session said the technology is already being used to authenticate news sources, to make sure they’re legitimate or at least accountable. Others suggested it could provide identity authentication for undocumented refugees who may not have passports.
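One simple way to picture that news-authentication use, as a sketch rather than a description of any deployed system: a publisher records only the hash of an article on a shared ledger, and anyone who later receives a copy can check that it matches the registered original, without the ledger storing the article itself or learning who is checking.

```python
import hashlib

def fingerprint(text):
    # A SHA-256 digest uniquely identifies the article's exact contents.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

public_ledger = set()  # stand-in for entries recorded on a shared blockchain

# Hypothetical publisher registers an article by publishing only its hash.
article = "IGF 2017 convenes in Geneva to discuss internet policy."
public_ledger.add(fingerprint(article))

# Later, any reader can verify that the copy they received is the registered one.
received_copy = "IGF 2017 convenes in Geneva to discuss internet policy."
print(fingerprint(received_copy) in public_ledger)   # True: matches the original

tampered_copy = received_copy.replace("Geneva", "Zurich")
print(fingerprint(tampered_copy) in public_ledger)   # False: altered copy
```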

Artificial intelligence was also a big topic at IGF. I attended a workshop on Social Responsibility and Ethics in Artificial Intelligence, chaired by Urs Gasser, director of the Berkman Klein Center for Internet & Society at Harvard University.

Three of the speakers were from China and each emphasized how the Chinese government is investing in AI, pointing out that China’s researchers are focused mainly on developing practical applications vs. the more theoretical research taking place in the U.S.

Yi Ma, who’s about to join the Computer Science department at UC Berkeley, pointed out that China is already on par with the U.S. in AI and is on track to become the world leader by 2030, especially when it comes to developing AI applications.

Like others on the panel, he argued that the benefits of AI are enormous despite the challenges, which include the privacy implications of the vast amounts of data collected, the potential for the humans who program AI to inject their own biases into the code, and the risk of bad actors creating their own AI applications or injecting malicious code into otherwise benign ones. All of the panelists agreed that jobs will be eliminated, including some white-collar jobs that have so far escaped automation. But all also agreed that AI will usher in new jobs. Still, as has long been true when technology displaces jobs, some people will move on and thrive while others get left behind.

The panelists laughed off the Hollywood plot line of rogue machines morphing into super-intelligent combatants locked in a power struggle with humans, posing an existential threat to our survival. While such scenarios sell movie tickets, they don’t reflect the likely risks of AI, and they’re typical of how moral panics about new technologies focus on the wrong risks.

And speaking of morphing and wrong risks, as I reflected on my time at this and previous forums, I couldn’t help but feel a little self-conscious about how the internet safety field I’ve been involved with for more than 20 years has revised its own perceptions of risk. Early on, we focused almost exclusively on children’s access to pornography, and many greatly exaggerated the likelihood of children being sexually abused by strangers they met online. Those risks, along with cyberbullying, remain real, but for the moment at least, they’re being overshadowed by newer risks we couldn’t even have imagined a few years ago.

