Instead of asking major websites to censor content, let's establish a system that helps quality content rise to the top.
Given how fragmented the world is politically, culturally and economically, the internet is an amazing achievement. People all over the world have access to the same content, with the ability to express their views and share their creations. That openness is coming under threat as governments increasingly ask websites such as Facebook or YouTube to remove objectionable content, effectively turning those sites into censors.
The governments' motives may be sound: even societies that treasure freedom of speech usually protect community members by banning hate speech; some content is clearly intended for adults and should not be seen by children; and misinformation (so-called "fake news") is having international repercussions as voters are increasingly influenced by what they read onscreen.
Social networks do bear a responsibility for the content they disseminate, but they cannot solve this problem alone. Relying on their judgement would result in a patchwork of inconsistent enforcement, and would set a dangerous precedent by turning commercial entities into public censors.
Nor is it particularly effective: removing content becomes a game of whack-a-mole, with the same material popping up somewhere else the moment it is taken down. A better approach is to provide a mechanism that gives users a clearer understanding of the nature and authenticity of the content they encounter, so that they can more easily find, and trust, the kind of content they want to see.
There is a precedent for this in the entertainment industries: film certificates tell viewers which audiences a film is suitable for, and the PEGI age-rating system for computer games is used in more than 30 European countries.
We already have a solution that enables web users to trust the sites they visit, too. Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), give web users confidence in the ownership of the website they are visiting. These technologies are most often used to protect sensitive information on transactional websites, but they are increasingly being used on content websites too.
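To make that concrete, here is a minimal sketch, using only Python's standard library, of how a client can inspect the certificate an HTTPS site presents; the subject and issuer fields are what give a browser (and its user) confidence about who owns the site. The example is purely illustrative and not part of the proposal itself.

```python
# Minimal sketch: inspect the validated TLS certificate of an HTTPS site.
import socket
import ssl

def peer_certificate(hostname: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server's validated certificate."""
    context = ssl.create_default_context()  # verifies the chain of trust by default
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = peer_certificate("example.com")
    # 'subject' identifies who the certificate was issued to;
    # 'issuer' identifies the authority that vouched for that ownership claim.
    print(cert["subject"])
    print(cert["issuer"])
```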
What if these two ideas were combined, so that content certificates attested to the source of web content and offered guidance on its nature? Web publishers could offer different content streams for different audiences, and users would be empowered to choose the content they wish to view.

This doesn't solve all of the problems, of course. The biggest challenge might be getting content owners to participate. The ICRA content-rating system let website owners voluntarily classify their content, but it closed in 2010 because it was never widely adopted. While adult content providers might have a business motive to keep their content accessible to their customers, providers of fake news and hate speech are unlikely to want to rate their content or put their name to it.
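As a thought experiment, a content certificate could be as simple as a small signed assertion naming the publisher and a rating for a piece of content, which a browser or platform could verify before display. The sketch below is hypothetical: the field names, the rating label, and the use of Ed25519 signatures via the Python cryptography package are illustrative assumptions rather than an existing standard.

```python
# Hypothetical sketch of a signed "content certificate": a small JSON assertion
# naming the publisher and an audience rating, signed with the publisher's key.
# Field names and rating labels are illustrative, not a real standard.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign a rating assertion for a piece of content.
publisher_key = Ed25519PrivateKey.generate()
assertion = json.dumps({
    "publisher": "news.example.org",                        # who puts their name to the content
    "content_url": "https://news.example.org/story/123",
    "rating": "general-audience",                           # illustrative PEGI-style label
}, sort_keys=True).encode()
signature = publisher_key.sign(assertion)

# Consumer side: verify the assertion against the publisher's public key,
# which in practice would be distributed much like an SSL/TLS certificate.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, assertion)
    print("Rating assertion is authentic:", json.loads(assertion)["rating"])
except InvalidSignature:
    print("Assertion was tampered with or is not from this publisher")
```

In such a scheme, a forged or altered rating would fail verification, so a browser could treat unrated or unverifiable content differently from content whose publisher has put their name to it.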