Proposals for Reasonable Technology Regulation and an Internet Court

I have seen the outlines of a regulatory and judicial regime for internet companies that begins to make sense to me. In it, platforms set and are held accountable for their standards and assurances while government is held to account for its job — enforcing the law — with the establishment of internet courts.

I have not been a fan of net regulation to date, for reasons I’ll enumerate below. This is not to say that I oppose all regulation of the net; when there is evidence of demonstrable harm and consideration of the impact of the regulation itself — when there is good reason — I will favor it. I just have not yet seen a regulatory regime I could support. Meanwhile, even Mark Zuckerberg is inviting regulation, though I don’t agree with all his desires (more on that, too, below).

Then I was asked to join a Transatlantic High-Level Working Group on Content Moderation and Freedom of Expression organized by former FCC commissioner Susan Ness under the auspices of Penn’s Annenberg Public Policy Center and the University of Amsterdam’s Institute for Information Law. At the first meeting, held in stately Ditchley Park (I slept in servants’ quarters), I heard constructive and creative ideas for monitored self-regulation and, intriguingly, a proposal for an internet court. What follows is not a summary of the deliberations; though the discussion was harmonious and constructive, I don’t want to present this as a conclusion or consensus of the group, only as what most intrigued me. What I like about this outline is that it separates bad behavior (defined and dealt with by companies) from illegal behavior (which must be the province of courts) and enables public deliberation of new norms. Here’s the scenario:

  • A technology company sets forth a covenant with its users and authorities warranting what it will provide. Usually, such a document binds users to community standards governing unwanted behavior and content. But the covenant should also bind the company to assurances of what it will deliver, above and beyond what the law requires. These covenants can vary by platform and nation. The community of users should be given the opportunity for input into this covenant, which a regulator may approve.
  • In the model of the U.S. Federal Trade Commission, liability arises for the company when it fails to meet the standards it has warranted. A regulator tracks the company’s performance and responds to complaints with the enforcement cudgel of fines. This requires the company to provide transparency into certain data so its performance can be monitored. As I see it, that in turn requires the government to give the company safe harbor for sharing that data. Ideally, this safe harbor also enables companies to share data — with privacy protected — with researchers who can likewise monitor impact. (Post-Cambridge Analytica, it has become all but impossible to pry data from tech companies.)

Now draw a hard, dark line between unwanted behavior and illegal acts. A participant in the meeting made an innovative proposal for the creation of national internet courts. (I wish I could credit this person, but under the Chatham House Rule they wished to remain unnamed, though they gave me permission to write about the idea.) So:

  • Except in the most extreme matters (e.g., tracking, reporting, and eliminating terrorist incitement or child porn), a company’s responsibility to act on illegal content or behavior arises after the company has been notified by users or authorities. Once notified, the company is obligated to take action and can be held liable by the government for not responding appropriately.
  • The company can refer any matter of dispute to an internet court, newly constituted under a nation’s laws with specially trained judges and systems of communication that enable it to operate with speed and at scale. If the company is not sure what is illegal, the court should decide. If the object of the company’s actions — a takedown or a ban — wishes to appeal, the court will rule. The company will have representation in court, and affected parties may be represented as well.
  • The participant who proposed this internet court argued, eloquently and persuasively, that the process of negotiating legal norms, which in matters online is now occurring inside private corporations, must occur instead in public, in courts, and with due process.
  • The participant also proposed that the court would be funded by a specific fee or tax on the online companies. I suspect that the platforms would gladly pay if this got them out of the position of having to enforce vague laws with undue speed and with huge fines hanging over their heads.

That is a regulatory and legal regime — again, not a proposal, not a conclusion, only the highlights that impressed me — which strikes me as rational: it puts responsibilities in the appropriate bodies; it allows various platforms and communities to be governed differently, as is appropriate for each; and it gives companies the chance to operate positively before malign intent is assumed. Note that our group’s remit was to look at disinformation, hate speech, and other unacceptable behavior alongside protection of freedom of expression and assembly, not at other issues such as copyright — though especially after the disastrous Articles 11+13 in Europe’s new copyright legislation, that is a field crying out for due process.

The discussion that led here was informed by very good papers written about current regulatory efforts and also by the experience of people from government, companies (most of the largest platforms were not represented by current executives), and academia. I was most interested in the experience of one European nation that is rather quietly trying an experiment in regulation with one of the platforms, in essence role-playing between government and a company in an effort to inform lawmakers before they write laws.

In the meeting, I was not alone in beginning every discussion by urging that research be commissioned to inform any legislative or regulatory efforts, gathering hard evidence and producing clear definitions of harm and impact. These days, interventions are being designed under presumptions that, for example, young people are unable to separate fact from falsity and are spreading the latter (this research says the real problem is not the kids but their grandpas); or that the internet has dealt us all into filter bubbles (these studies referenced by Oxford’s Rasmus Kleis Nielsen do not support that). To obtain that evidence, I’ll repeat that companies should be given safe harbor to share data — and should be expected to do so — so we can study the reality of what is happening on the net.

At Ditchley, I also argued — to some disagreement, I’ll confess — that it would be a dangerous mistake to classify the internet as a medium and internet companies as publishers or utilities. Imagine if Facebook were declared to be both and then — as is being discussed, to my horror, on the American right — were subjected to equal-time regulation. Forcing Facebook to give presence and promotion to certain political views would then be tantamount to walking into the Guardian editor-in-chief’s office and requiring her to publish Nigel Farage. Thank God, I’m confident she wouldn’t. And thank our constitutional authors, we in the United States have (at least for now) a First Amendment that should forbid that. Rather than thinking of the net as a medium — and of what appears there as content — I urged the group (as I do to anyone who’ll read me here) to think of it instead as a mechanism for connections where conversation occurs. That public conversation, with new voices long ignored and finally heard, deserves protection. That is why I argue that the net is neither publisher nor utility but something new: the connection machine.

There were other interesting discussions in the meeting — for example, about whether to ban foreign interference in a nation’s elections and political discussion. That idea unraveled under examination because it could also prevent organizing international campaigns for, say, climate reform or democracy. There was also much discussion about the burden regulation puts on small companies — or larger companies in smaller countries — raising the barrier to entry and making big companies, which have the lawyers and technologists needed to deal with regulation, only bigger and more powerful.

Principles for legislation

It is critical that any discussion of legislative efforts begin at the level of principles rather than as a response to momentary panic or political point-scoring (in Europe, pols apparently think they can score votes by levying big fines on large American companies; in America, certain pols are hoping to score by promising — without much reason, in my opinion — to break successful companies up).

According to many in the working group meeting, the best place to begin is with the Universal Declaration of Human Rights, namely:

Article 19.
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

Article 20.
(1) Everyone has the right to freedom of peaceful assembly and association.
(2) No one may be compelled to belong to an association.

It is no easy matter to decide on other principles that should inform legislation. “Fake news” laws say that platforms must eradicate misinformation and disinformation, in effect making truth the standard. But truth is a terrible standard, for no one — especially not the platforms — wants to be its arbiter, and making government that arbiter is a fast track to authoritarianism (see: China). Discerning truth from falsehood is a goal of the public conversation, and that conversation needs to flow freely, if bumptiously, to achieve it.

Civility seems an appealing standard but it is also troubling. To quote Joan Wallach Scott in the just-published Knowledge, Power, and Academic Freedom: “The long history of the notion of civility shows that it has everything to do with class, race, and power.” She quotes Michael Meranze arguing that “ultimately the call for civility is a demand that you not express anger; and if it was enforced it would suggest there is nothing to be angry about in the world.” Enforcement of civility also has a clear impact on freedom of expression. “Hence the English laws regulating language extended protection only to the person harmed by another’s words, never to the speaker,” explains Debora Shuger in Censorship & Cultural Sensibility: The Regulation of Language in Tudor-Stuart England. When I spoke with Yascha Mounk on his podcast about this question, he urged holding onto civility, for he said one can call a nazi a nazi and still be civil. Germany’s NetzDG leans toward enforcement of civility by requiring platforms to take down not only hate speech but also “defamation or insult.” (Google reported 52,000 such complaints and took down 12,000 items as a result.) But again, sometimes insult is warranted. I say that civility and civilization cannot be legislated.

Harm would be a decent standard if it were well-researched and clearly defined. But it has not been.

Banning anonymity is a dangerous standard, for requiring verified identity endangers the vulnerable in society, hands a tool of oppression to autocratic regimes, and puts privacy at risk.

Of course, there are many other working groups and many other convenings hashing over just these issues, and well they should. The more open discussion we have, with input from the public and not just lobbyists, the less likely we are to face more abominations like Articles 11+13. This report by Chris Middleton from the Westminster eForum presents more useful guidelines for making guidelines. For example, Daniel Dyball, UK executive director of the Internet Association,

proposed his own six-point wishlist for regulation: It should be targeted at specific harms using a risk based approach, he said; it should provide flexibility to adapt to changing technologies, services, and societal expectations; it should maintain the intermediary liability protections that enable the internet to deliver benefits to consumers, society, and the economy; it should be technically possible to implement in practice; it should provide clarity and certainty for consumers, citizens, and internet companies; and finally, it should recognise the distinction between public and private communications — an issue made more difficult by Facebook….

Middleton also quotes Victoria Nash of the Oxford Internet Institute, who argued for human rights as the guiding principle of any regulation and for a commitment to safe harbor to enable companies to take risks in good faith. “Well-balanced immunity or safe harbor are vital if we want responsible corporate behavior,” she said. She argued for minimizing the judgments companies must make in ruling on content; as Middleton reports, “Nash said she would prefer judgments that concentrate on illegal rather than ‘legal but harmful’ content.” She said that laws should encourage due process over haste. And she said systems should hold both companies and governments to account, adding: “I don’t have the belief that government will always act in the public interest.” Amen. Cue John Perry Barlow.

All of which is to say that regulating the internet is not and should not be easy. The implications and risks to innovation and ultimately democracy are huge. We must hold government to account for careful deliberation on well-researched evidence, for writing legislation with clearly enforceable standards, and for enforcing those laws.

Principles for company covenants

Proposing the covenants internet companies should make with their users, the public at large, and government — both what the companies promise to provide and what behavior they will demand of users and how they will enforce it — could be a useful way to hold a discussion about what we expect from platforms.

Do we expect them to eliminate misinformation, incivility, or anonymity? See my discussion above. Then how about a safe space free of hatred? But we all hate nazis and should be free to say so. See Yascha Mounk’s argument. Then how about banning bigots? There’s a good start. But who’s a bigot? It took the platforms some time to decide that Alex Jones was one. They did so only after the public was so outraged by his behavior that companies were given cover to delete him. What happens in such cases, as I argue in this piece for The Atlantic, is that standards become emergent, bottom-up, after-the-fact, and unenforceable until after the invasion.

I urge you to read Facebook’s community standards. They’re actually pretty good, and yet they certainly don’t solve the problem. Twitter has rules against abusive behavior, but I constantly see complaints that they are not adequately enforced.

This, I think, is why Mark Zuckerberg cried uncle and outright asked for regulation and enforcement from government. Government couldn’t figure out how to handle problems online, so it outsourced its job to the companies. Now the companies want to send the task back to government. In response to Zuckerberg’s op-ed, Republican FCC Commissioner Brendan Carr pushed back: “Facebook says it’s taking heat for the mistakes it makes in moderating content. So it calls for the government to police your speech for it. Outsourcing censorship to the government is not just a bad idea, it would violate the First Amendment. I’m a no.” Well, except that when it is government forcing Facebook to take action against speech, that is itself government interference in speech and a violation of the First Amendment. The real problem is the quasi-legal nature of this fight: governments in Europe and the U.S. are ordering platforms to get rid of “harmful” speech without defining harm in law and without due process. It’s a game of hot potato, and the potato is in midair. [Disclosure: I raised money for my school from Facebook, but we are independent of it and I receive no money personally from any platform. Through various holdings and mutual funds, I probably own stock in most major platforms.]

Zuckerberg urges a “more standardized approach” regarding harmful content as well as privacy and definitions of political advertising. I well understand his desire to find consistency. He said: “I also believe a common global framework — rather than regulation that varies significantly by country and state — will ensure that the Internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protection.”

But the internet is fractured and nations and cultures are different. A recent paper by Kieron O’Hara and Wendy Hall says the net is already split into four or five pieces: the open net of Silicon Valley, the commercial web of American business, the regulated “bourgeois internet” of Europe, the authoritarian internet of China, and the misinformation internet of Russia and North Korea.

I worry about Zuckerberg’s call for global regulation for I fear that the net will be run according to the lowest common denominator of freedom and the highest watermark of regulation.

None of this is easy, and neither companies nor governments — nor we, the public — can shirk our duties to research, discern, debate, and decide on the kind of internet and society we want to build. This is a long, arduous process of trial and error and of negotiation of our new laws and norms. There’s no quick detour around it. That’s why I want to see frameworks that are designed to include a process of discussion and negotiation. That’s why I am warming to the structure I outlined above, which allows for community input into community standards and requires legislative consideration and judicial due process from government.

What I don’t want

I’ve been highly critical of much net regulation to date and, though I’ve written about that elsewhere, I will include my objections here for context. In my view, attempts to regulate the net too often:

  1. Spring from moral panic rather than evidence (see Germany’s NetzDG hate-speech law);
  2. Are designed for protectionism over innovation (see Articles 11+13 of Europe’s horrendous new copyright law and its predecessors, Germany’s Leistungsschutzrecht or ancillary copyright and Spain’s link tax);
  3. Imperil freedom of expression and knowledge (see 11+13, the right to be forgotten, and the French and Singaporean fake news laws, which make platforms deciders of truth);
  4. Are conceived under vague and unenforceable standards (see where the UK is headed against “harmful content” in its Commons report on Disinformation and “fake news”);
  5. Transfer government authority and obligations to corporations, which now act in private and without due process as legislature, police, judge, jury, jailer, and censor (see the right to be forgotten and NetzDG);
  6. Result in misallocation of societal resources (Facebook has hired, by the latest count, 30,000 monitors — up from 20,000 — to look for hate, while America has fewer than 30,000 newspaper reporters looking for corruption);
  7. Fall prey to the law of unintended consequences: making companies more responsible makes them more powerful (see GDPR and many of the rest).

And newly proposed regulation gets even worse with recent suggestions to require government permits for live streaming or to mandate that platforms vet everything that’s posted.

If this legislative juggernaut — and the moral panic that fuels it — is not slowed, I fear for the future of the net. That is why I think it is important to discuss regulatory regimes that will first do no harm.