This is the first installment in a four-part blog series surveying global intermediary liability laws.
The vast majority of internet users around the world interact with online intermediaries—including internet service providers (ISPs), search engines, and social media platforms—on a regular basis. These companies play an essential role in enabling access to information and connecting people across the globe, and are major drivers of economic growth and innovation.
Therefore, the policies that intermediaries adopt to govern online marketplaces and platforms significantly shape users’ social, economic, and political lives. Such policies have major implications for users’ fundamental rights, including freedom of expression, freedom of association, and the right to privacy.
The increasingly powerful role of intermediaries in modern society has prompted a host of policy concerns. One key policy challenge is defining online intermediaries’ legal liability for harms caused by content generated or shared by—or activities carried out by—their users or other third parties.
We are concerned by the growing number of governments around the world that are embracing heavy-handed approaches to intermediary regulation. Policymakers today not only expect platforms to detect and remove illegal content, but are increasingly calling on platforms to take down legal but undesirable or “harmful” content as well.
Recent government proposals to address “harmful” content are dangerously misguided and will inevitably result in the censorship of all kinds of lawful and valuable expression. Harsher liability laws for online intermediaries encourage platforms to affirmatively monitor how users behave; filter and check users’ content; and remove or locally filter anything that is controversial, objectionable, or potentially illegal to avoid legal responsibility. Examples of such proposals will be discussed in part three of this blog series.
Faced with expansive and vague moderation obligations, little time for analysis, and major legal consequences if they guess wrong, companies inevitably overcensor. Stricter regulation of and moderation by platforms also results in self-censorship, as users try to avoid negative repercussions for their artistic and political expression. And, without legal protection, service providers easily become targets for governments, corporations, and bad actors who want to silence users.
The next few years will be decisive for the core rules that govern much of today’s internet. In this light, we offer up this four-part blog series, entitled Platform Liability Trends Around the Globe, to help navigate the jungle that is global intermediary liability regulation.
We begin by providing some background information and exploring global shifts in approaches to intermediary liability laws. In Part Two we’ll unpack different approaches to intermediary liability, as well as explore some regulatory “dials and knobs” that are available to policymakers. Part Three will take a look at some new developments taking place around the world. Finally, in Part Four, we’ll dig into EFF’s perspective and provide some recommendations as we consider the future of global intermediary liability policy.
A Brief History of Intermediary Liability Rules
Let’s start with a brief outline of intermediary liability laws, the political context that gave rise to them, and today’s changing political discourse surrounding them.
Generally, intermediary liability laws deal with the legal responsibility of online service providers for harms caused by content created or shared by users or other third parties.
Most intermediary liability regulations share one core function: to shield intermediaries from legal liability arising from content posted by users (the exact scope of this immunity or safe harbor varies across jurisdictions, as will be discussed later in this series). These laws acknowledge the important role online service providers play in the exercise of fundamental rights in today’s society.
The need to create specific liability rules became apparent in the 1990s, as internet platforms were increasingly sued for harms caused by their users’ actions and speech. This trend of targeting internet intermediaries led to a host of problems: it increased the risks associated with investing in the fledgling internet economy, created legal uncertainty for users and businesses, and fragmented legal regimes across countries and regions.
Trying to counterbalance this trend, lawmakers around the globe introduced safe harbors and other liability limitations for internet intermediaries. In protecting intermediaries from liability, safe harbor laws pursue three goals: (1) to encourage economic activity and innovation, (2) to protect internet users’ freedom of speech, and (3) to encourage intermediaries to tackle illegal content and to take actions to prevent harm.
A Shift in Tone—from Liability Exemptions to Responsibility
These goals are still highly relevant even though today's online environment is different from the one for which the early regulations were enacted. Today, a handful of companies are dominant global players on the internet and have become ecosystems unto themselves.
There are many potential responses to the dominance of “big tech.” At EFF, we have long advocated for interoperability and data portability as part of the answer to outsized market power. Liability exemptions are not a “gift to big tech”; rather, they ensure that users can share speech and content over the internet using a variety of services. Nevertheless, some consider liability exemptions as giving an unfair advantage to the dominant platforms.
Political discourse has also moved on in important ways. Internet intermediaries—and especially social media networks—are spaces where a considerable amount of public discourse occurs, and they often play a role in shaping that discourse themselves. In recent years, major global events have thrust social media platforms into the public spotlight, including: foreign interference in the 2016 US presidential election; the Cambridge Analytica scandal; the ethnic cleansing of the Rohingya in Myanmar; the 2019 Christchurch mosque shootings; and the spread of misinformation threatening the integrity of elections in countries like Brazil, India, and the United States.
Online intermediaries are under increased scrutiny as a result of the widespread perception, among both the public and policymakers, that companies’ responses to recurring problems like misinformation, cyberbullying, and online hate speech have been insufficient. This “techlash” has given rise to calls for new and harsher rules for online intermediaries.
Recent accountability debates have shifted the focus towards platforms’ assumed obligations based on moral or ethical arguments concerning the public role of online intermediaries in a democratic society. Rather than focusing on a utility or welfare-based approach to liability limitations, policymakers are increasingly moving towards a discourse of responsibility. Because so many people rely on them to communicate with each other, and because they appear so powerful, online platforms—and in particular social media services—are increasingly viewed as gatekeepers who have a responsibility towards the public good.
This expectation that intermediaries respond to current cultural or social norms has led to two related policy responses, both centering on the need for platforms to take on more “responsibility”: an increasing reliance on corporate social responsibility and other forms of self-intervention by intermediaries, and a greater push to legally require platforms to establish proper governance structures and effectively tackle user misconduct. Some suggestions center on the need for platforms to take more effective voluntary action against harmful content and adopt moderation frameworks that are consistent with human rights. Other, more aggressive and dangerous policy responses consider upload filters and proactive monitoring obligations a solution.
EFF has long worked to provide guidance in response to shifting norms around this topic. In 2015, we, as part of an international coalition, helped launch the Manila Principles on Intermediary Liability, a framework of baseline safeguards and best practices based on international human rights instruments and other international legal frameworks. In 2018, EFF and partners launched the Santa Clara Principles on Transparency and Accountability in Content Moderation, which call on intermediaries to voluntarily adopt better practices. In 2021, a new version of the principles was developed, with a focus on adequately addressing fundamental inequities in platforms’ due process and transparency practices for different communities and in different markets. For this revision, the Santa Clara Principles coalition initiated an open call for comments from a broad range of global stakeholders. Feedback was received from allies in over forty countries, and the second iteration of the Santa Clara Principles was launched in December 2021.
The current political climate toward intermediary regulation and changing market conditions could lead to a shift in the basic ideas on which current safe harbor regimes are based. We at EFF believe this may turn out to be a slippery slope. We fear the consequence of stricter liability regimes could be the loss of safe harbors for internet intermediaries, reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights for internet users around the world.
These themes will be explored in more detail in subsequent posts in this four-part series, Platform Liability Trends Around the Globe. Many thanks to former EFF Mercator Fellow Svea Windwehr, who conducted an initial analysis of platform regulatory trends, and former EFF intern Sasha Mathew, who assisted in writing this blog series.