This blog post summarizes the findings of a paper published in Volume 21 of the Federalist Society Review. The paper was co-authored by Dirk Auer, Geoffrey A. Manne, Julian Morris, & Kristian Stout. It uses the analytical framework of law and economics to discuss recent patent law reforms in the US, and their negative ramifications for inventors. The full paper can be found on the Federalist Society’s website, here.

Property rights are a pillar of the free market. As Harold Demsetz famously argued, they spur specialization, investment and competition throughout the economy. And the same holds true for intellectual property rights (IPRs). 

However, despite the many social benefits that have been attributed to intellectual property protection, the past decades have witnessed the birth and growth of a powerful intellectual movement seeking to reduce the legal protections offered to inventors by patent law.

These critics argue that excessive patent protection is holding back western economies. For instance, they posit that the owners of standard-essential patents (“SEPs”) charge their commercial partners too much for the right to use their patents (this is referred to as patent holdup and royalty stacking). Furthermore, they argue that so-called patent trolls (“patent-assertion entities,” or “PAEs”) deter innovation by small startups by employing “extortionate” litigation tactics.

Unfortunately, this movement has led to an erosion of the remedies available to patent holders in patent disputes.

The many benefits of patent protection

While patents likely play an important role in providing inventors with incentives to innovate, their role in enabling the commercialization of ideas is probably even more important.

By creating a system of clearly defined property rights, patents empower market players to coordinate their efforts in order to collectively produce innovations. In other words, patents greatly reduce the cost of concluding mutually-advantageous deals, whereby firms specialize in various aspects of the innovation process. Critically, these deals occur in the shadow of patent litigation and injunctive relief. The threat of these ensures that all parties have an incentive to take a seat at the negotiating table.

This is arguably nowhere more apparent than in the standardization space. Many of the most high-profile modern technologies are the fruit of large-scale collaboration coordinated through standards developing organizations (SDOs). These include technologies such as Wi-Fi, 3G, 4G, 5G, Blu-Ray, USB-C, and Thunderbolt 3. The coordination necessary to produce technologies of this sort is hard to imagine without some form of enforceable property right in the resulting inventions.

The shift away from injunctive relief

Of the many recent reforms to patent law, the most consequential has arguably been the sharp limitation of patent holders’ ability to obtain permanent injunctions. This is particularly true in the case of SEPs.

However, intellectual property laws are meaningless without the ability to enforce them and remedy breaches. And injunctions are almost certainly the most powerful, and important, of these remedies.

The significance of injunctions is perhaps best understood by highlighting the weakness of damages awards when applied to intangible assets. Indeed, it is often difficult to establish the appropriate size of an award of damages when intangible property—such as invention and innovation in the case of patents—is the core property being protected. This is because these assets are almost always highly idiosyncratic. By blocking all infringing uses of an invention, injunctions thus prevent courts from having to act as price regulators. In doing so, they also ensure that innovators are adequately rewarded for their technological contributions.

Unfortunately, the Supreme Court’s 2006 ruling in eBay Inc. v. MercExchange, LLC significantly narrowed the circumstances under which patent holders could obtain permanent injunctions. This predictably led lower courts to grant fewer permanent injunctions in patent litigation suits. 

But while critics of injunctions had hoped that reducing their availability would spur innovation, empirical evidence suggests that this has not been the case so far. 

Other reforms

Injunctions are not the only area of patent law that has witnessed a gradual shift against the interests of patent holders. Much the same could be said about damages awards, revised fee-shifting standards, and the introduction of Inter Partes Review.

Critically, the intellectual movement to soften patent protection has also had ramifications outside of the judicial sphere. It is notably behind several legislative reforms, particularly the America Invents Act. Moreover, it has led numerous private parties – most notably SDOs – to adopt stances that advance the interests of technology implementers at the expense of inventors.

For instance, one of the most noteworthy developments has been the IEEE’s sweeping 2015 reform of its IP policy. The new rules notably prevented SEP holders from seeking permanent injunctions against so-called “willing licensees”. They also mandated that royalties for SEPs be based upon the value of the smallest saleable component that practices the patented technology. Both measures ultimately sought to tilt the bargaining range in license negotiations in favor of implementers.

Concluding remarks

The developments discussed in this article might seem like small details, but they are part of a wider trend whereby U.S. patent law is becoming increasingly inhospitable for inventors. This is particularly true when it comes to the enforcement of SEPs by means of injunction.

While the short-term effect of these various reforms has yet to be quantified, there is a real risk that, by decreasing the value of patents and increasing transaction costs, these changes may ultimately limit the diffusion of innovations and harm incentives to invent.

This likely explains why some legislators have recently put forward bills that seek to reinforce the U.S. patent system (here and here).

Despite these initiatives, the fact remains that there is today a strong undercurrent pushing for weaker or less certain patent protection. If left unchecked, this threatens to undermine the utility of patents in facilitating the efficient allocation of resources for innovation and its commercialization. Policymakers should thus pay careful attention to the changes this trend may bring about and move swiftly to recalibrate the patent system where needed in order to better protect the property rights of inventors and yield more innovation overall.

As Thomas Sowell has noted many times, political debates often involve the use of words that, taken literally, mean something very different from the connotations they convey. Examples abound in the debate about broadband buildout.

There is a general consensus on the need to subsidize aspects of broadband buildout to rural areas in order to close the digital divide. But this real need allows for strategic obfuscation of key terms in this debate by parties hoping to achieve political or competitive gain. 

“Access” and “high-speed broadband”

For instance, nearly everyone would agree that Internet policy should “promote access to high-speed broadband.” But how some academics and activists define “access” and “high-speed broadband” is much different from what the average American would expect.

A commonsense definition of access is that consumers have the ability to buy broadband sufficient to meet their needs, considering the costs and benefits they face. In the context of the digital divide between rural and urban areas, the different options available to consumers in each area are a reflection of the very real costs and other challenges of providing service. In rural areas with low population density, it costs broadband providers considerably more per potential subscriber to build the infrastructure needed to provide service. At some point, depending on the technology, it is no longer profitable to build out to the next customer several miles down the road. The options and prices available to rural consumers reflect this unavoidable fact. Holding price constant, there is no doubt that many rural consumers would prefer higher speeds than are currently available to them. But this is not the real-world choice that presents itself.

But access in this debate instead means the availability of the same broadband options regardless of where people live. Rather than being seen as a reflection of underlying economic realities, the fact that rural Americans do not have the same options available to them that urban Americans do is seen as a problem that calls out for a political solution. Thus, billions of dollars are spent in an attempt to “close the digital divide” by subsidizing broadband providers to build infrastructure out to rural areas.

“High-speed broadband” similarly has a meaning in this debate significantly different from what many consumers, especially those lacking “high-speed” service, expect. For consumers, fast enough is what allows them to use the Internet in the ways they desire. What is fast enough does change over time as more and more uses for the Internet become common. This is why the FCC has changed the technical definition of broadband multiple times over the years as usage patterns and bandwidth requirements change. Currently, the FCC uses 25 Mbps down/3 Mbps up as the baseline for broadband.

However, for some, like Jonathan Sallet, this is thoroughly insufficient. In his Broadband for America’s Future: A Vision for the 2020s, he instead proposes “100 Mbps symmetrical service without usage limits.” The study does little to explain the disconnect between this arbitrary number and consumer demand as measured in the marketplace, where real trade-offs between cost and performance apply. The assumption is simply that faster is better, and that the building of faster networks is a mere engineering issue once sufficiently funded and executed with enough political will.

But there is little evidence that consumers “need” faster Internet than the market is currently providing. In fact, one Wall Street Journal study suggests “typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.” Moreover, there is even less evidence that most consumers or businesses need anything close to upload speeds of 100 Mbps. For even intensive uses like high-resolution live streaming, recommended upload speeds still fall far short of 100 Mbps. 

“Competition” and “Overbuilding”

Similarly, no one objects to the importance of “competition in the broadband marketplace.” But what is meant by this term is subject to vastly different interpretations.

The number of competitors is not the same as the amount of competition. Competition is a process by which market participants discover the best way to serve consumers at the lowest cost. Specific markets are often subject to competition not only from the firms that exist within those markets, but also from potential competitors who may enter the market any time potential profits reach a point high enough to justify the costs of entry. An important inference from this is that temporary monopolies, in the sense that one firm has a significant share of the market, are not in themselves illegal under antitrust law, even if the firm charges monopoly prices. Potential entry is as real in its effects as actual competition in forcing incumbents to continue to innovate and provide value to consumers.

However, many assume the best way to encourage competition in broadband buildout is to simply promote more competitors. A significant portion of Broadband for America’s Future emphasizes the importance of subsidizing new competition in order to increase buildout, increase quality, and bring down prices. In particular, Sallet emphasizes the benefits of municipal broadband, i.e. when local governments build and run their own networks. 

In fact, Sallet argues that fears of “overbuilding” are really just fears of competition by incumbent broadband ISPs:

Language here is important. There is a tendency to call the construction of new, competitive networks in a locality with an existing network “overbuilding”—as if it were an unnecessary thing, a useless piece of engineering. But what some call “overbuilding” should be called by a more familiar term: “Competition.” “Overbuilding” is an engineering concept; “competition” is an economic concept that helps consumers because it shifts the focus from counting broadband networks to counting the dollars that consumers save when they have competitive choices. The difference is fundamental—overbuilding asks whether the dollars spent to build another network are necessary for the delivery of a communications service; economics asks whether spending those dollars will lead to competition that allows consumers to spend less and get more. 

Sallet makes two rhetorical moves here to make his argument. 

The first is redefining “overbuilding,” which refers to literally building a new network on top of (that is, “over”) previously built architecture, as a ploy by ISPs to avoid competition. But this is truly Orwellian. When a new entrant can build over an incumbent and take advantage of the first-mover’s investments to enter at a lower cost, a failure to compensate the first-mover is free riding. If the government compels such free riding, it reduces incentives for firms to make the initial investment to build the infrastructure.

The second is defining competition as the number of competitors, even if those competitors need to be subsidized by the government in order to enter the marketplace.  

But there is no way to determine the “right” number of competitors in a given market in advance. In the real world, markets don’t match blackboard descriptions of perfect competition. In fact, there are sometimes high fixed costs which limit the number of firms which will likely exist in a competitive market. In some markets, known as natural monopolies, high infrastructure costs and other barriers to entry relative to the size of the market make it cheaper for a single firm to provide a good or service than for multiple firms to do so. But it is important to note that only firms operating under market pressures can assess the viability of competition. This is why there is a significant risk in government subsidizing entry.

Competition drives sustained investment in the capital-intensive architecture of broadband networks, which suggests that ISPs are not natural monopolies. If they were, then having a monopoly provider regulated by the government to ensure the public interest, or government-run broadband companies, may make sense. In fact, Sallet denies ISPs are natural monopolies, stating that “the history of telecommunications regulation in the United States suggests that monopolies were a result of policy choices, not mandated by any iron law of economics” and “it would be odd for public policy to treat the creation of a monopoly as a success.” 

As noted by economist George Ford in his study, The Impact of Government-Owned Broadband Networks on Private Investment and Consumer Welfare, unlike the threat of entry which often causes incumbents to act competitively even in the absence of competitors, the threat of subsidized entry reduces incentives for private entities to invest in those markets altogether. This includes both the incentive to build the network and update it. Subsidized entry may, in fact, tip the scales from competition that promotes consumer welfare to that which could harm it. If the market only profitably sustains one or two competitors, adding another through municipal broadband or subsidizing a new entrant may reduce the profitability of the incumbent(s) and eventually lead to exit. When this happens, only the government-run or subsidized network may survive because the subsidized entrant is shielded from the market test of profit-and-loss.

The “Donut Hole” Problem

The term “donut hole” is a final example to consider of how words can be used to confuse rather than enlighten in this debate.

There is broad agreement that, to generate the positive externalities of universal service, there need to be subsidies for buildout to high-cost rural areas. However, this seeming agreement masks vastly different approaches.

For instance, some critics of the current subsidy approach have identified a phenomenon where the city center has multiple competitive ISPs and government policy extends subsidies to ISPs to build out broadband coverage into rural areas, but Internet service in the areas in between remains relatively paltry due to a lack of private or public investment. They describe this as a “donut hole” because the “unserved” rural areas receive subsidies while “underserved” outlying parts immediately surrounding town centers receive nothing under current policy.

Conceptually, this is not a donut hole. It is actually more like a target or bullseye, where the city center is served by private investment and the rural areas receive subsidies to be served. 

Indeed, there is a different use of the term donut hole, which describes how public investment in city centers can create a donut hole of funding needed to support rural build-out. Most Internet providers rely on profits from providing lower-cost service to higher-population areas (like city centers) to cross-subsidize the higher cost of providing service in outlying and rural areas. But municipal providers generally only provide municipal service — they only provide lower-cost service. This hits the carriers that serve higher-cost areas with a double whammy. First, every customer that municipal providers take from private carriers cuts the revenue that those carriers rely on to provide service elsewhere. Second, and even more problematic, because the municipal providers have lower costs (because they tend not to serve the higher-cost outlying areas), they can offer lower prices for service. This “competition” exerts downward pressure on the private firms’ prices, further reducing revenue across their entire in-town customer base. 

This version of the “donut hole,” in which municipal entry undermines the city-center revenues that private firms rely on to support the costs of providing service to outlying areas, has two simultaneous effects. First, it directly reduces the funding available to serve more rural areas. And, second, it increases the average cost of providing service across the private carrier’s network (because the carrier is no longer recovering as much of its costs from the lower-cost city core), which increases the prices that need to be charged to rural users in order to justify offering service at all.
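
To make this mechanism concrete, here is a minimal back-of-the-envelope sketch in Python. All of the subscriber counts, prices, and costs are hypothetical assumptions chosen only to illustrate the point: losing city-center customers to a subsidized entrant, and cutting city-center prices to meet its competition, raises the price a private carrier must charge rural users just to break even.

```python
def rural_price_needed(urban_subs, urban_price, urban_cost,
                       rural_subs, rural_cost):
    """Price per rural subscriber needed to break even network-wide,
    given the margin earned on urban (city-center) subscribers."""
    urban_margin = urban_subs * (urban_price - urban_cost)
    # Rural revenue must cover rural costs minus whatever the urban margin contributes.
    return (rural_subs * rural_cost - urban_margin) / rural_subs

# Before municipal entry: 10,000 urban subscribers cross-subsidize 2,000 rural ones.
before = rural_price_needed(urban_subs=10_000, urban_price=50, urban_cost=30,
                            rural_subs=2_000, rural_cost=120)

# After entry: the private carrier loses 4,000 urban subscribers and cuts its
# urban price to $40 to keep the rest; rural costs are unchanged.
after = rural_price_needed(urban_subs=6_000, urban_price=40, urban_cost=30,
                           rural_subs=2_000, rural_cost=120)

print(f"Break-even rural price before entry: ${before:.2f}")  # $20.00
print(f"Break-even rural price after entry:  ${after:.2f}")   # $90.00
```

Under these illustrative numbers, the break-even rural price more than quadruples, which is just another way of saying that service to some rural customers stops being worth offering at all.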

Conclusion

Overcoming the problem of the rural digital divide starts with understanding why it exists. It is simply more expensive to build networks in areas with low population density. If universal service is the goal, subsidies, whether explicit subsidies from government or implicit cross-subsidies by broadband companies, are necessary to build out to these areas. But obfuscations about increasing “access to high-speed broadband” by promoting “competition” shouldn’t control the debate.

Instead, there needs to be a nuanced understanding of how government-subsidized entry into the broadband marketplace can discourage private investment and grow the size of the “donut hole,” thereby leading to demand for even greater subsidies. Policymakers should avoid exacerbating the digital divide by prioritizing subsidized competition over market processes.

During last week’s antitrust hearing, Representative Jamie Raskin (D-Md.) provided a sound bite that served as a salvo: “In the 19th century we had the robber barons, in the 21st century we get the cyber barons.” But with sound bites, much like bumper stickers, there’s no room for nuance or scrutiny.

The news media has extensively covered the “questioning” of the CEOs of Facebook, Google, Apple, and Amazon (collectively “Big Tech”). Of course, most of this questioning was actually political posturing with little regard for the actual answers or antitrust law. But just like with the so-called robber barons, the story of Big Tech is much more interesting and complex. 

The myth of the robber barons: Market entrepreneurs vs. political entrepreneurs

The Robber Barons: The Great American Capitalists, 1861–1901 (1934), by Matthew Josephson, was written in the midst of America’s Great Depression. Josephson, a Marxist with sympathies for the Soviet Union, made the case that the 19th-century titans of industry were made rich on the backs of the poor during the industrial revolution. The idea that the rich are wealthy because they have robbed the rest of us has long outlived Josephson and Marx, down to the present day, as exemplified by the writings of Matt Stoller and the politics of the House Judiciary Committee.

In The Myth of the Robber Barons, Burton Folsom, Jr. makes the case that much of the received wisdom about the great 19th-century businessmen is wrong. He distinguishes between market entrepreneurs, who generated wealth by selling newer, better, or less expensive products on the free market without any government subsidies, and political entrepreneurs, who became rich primarily by influencing the government to subsidize their businesses or to enact legislation and regulation that harmed their competitors.

Folsom narrates the stories of market entrepreneurs, like Thomas Gibbons & Cornelius Vanderbilt (steamships), James Hill (railroads), the Scranton brothers (iron rails), Andrew Carnegie & Charles Schwab (steel), and John D. Rockefeller (oil), who created immense value for consumers by drastically reducing the prices of the goods and services their companies provided. Yes, these men got rich. But the value society received was arguably even greater. Wealth was created because market exchange is a positive-sum game.

On the other hand, the political entrepreneurs, like Robert Fulton & Edward Collins (steamships), and Leland Stanford & Henry Villard (railroads), drained societal resources by using taxpayer money to create inefficient monopolies. Because their favored position shielded them from the same market discipline, cutting costs and prices mattered less to them than it did to the market entrepreneurs. Their wealth came at the expense of the rest of society, because political exchange is a zero-sum game.

Big Tech makes society better off

Today’s titans of industry, i.e. Big Tech, have created enormous value for society. This is almost impossible to deny, though some try. From zero-priced search on Google, to the convenience and price of products on Amazon, to the nominally free social network(s) of Facebook, to the plethora of options in Apple’s App Store, consumers have greatly benefited from Big Tech. Consumers flock to use Google, Facebook, Amazon, and Apple for a reason: they believe they are getting a great deal. 

By and large, the techlash comes from “intellectuals” who think they know better than consumers acting in the marketplace what is good for them. And as noted by Alec Stapp, Americans in opinion polls consistently put a great deal of trust in Big Tech, at least compared to government institutions.

One of the basic building blocks of economics is that both parties benefit from a voluntary exchange ex ante, or else they would not be willing to engage in it. The fact that consumers use Big Tech to the extent they do is overwhelming evidence of its value. Obfuscations like “market power” mislead more than they inform. In the absence of governmental barriers to entry, consumers voluntarily choosing Big Tech does not show that these firms hold power over them; it shows that they provide great service.

Big Tech companies are run by entrepreneurs who must ultimately answer to consumers. In a market economy, profits are a signal that entrepreneurs have successfully brought value to society. But they are also a signal to potential competitors. If Big Tech companies don’t continue to serve the interests of their consumers, they risk losing them to competitors.

Big Tech’s CEOs seem to get this. For instance, Jeff Bezos’ written testimony emphasized the importance of continual innovation at Amazon as a reason for its success:

Since our founding, we have strived to maintain a “Day One” mentality at the company. By that I mean approaching everything we do with the energy and entrepreneurial spirit of Day One. Even though Amazon is a large company, I have always believed that if we commit ourselves to maintaining a Day One mentality as a critical part of our DNA, we can have both the scope and capabilities of a large company and the spirit and heart of a small one. 

In my view, obsessive customer focus is by far the best way to achieve and maintain Day One vitality. Why? Because customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and a constant desire to delight customers drives us to constantly invent on their behalf. As a result, by focusing obsessively on customers, we are internally driven to improve our services, add benefits and features, invent new products, lower prices, and speed up shipping times—before we have to. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it. And I could give you many such examples. Not every business takes this customer-first approach, but we do, and it’s our greatest strength.

The economics of multi-sided platforms: How Big Tech does it

Economically speaking, Big Tech companies are (mostly) multi-sided platforms. Multi-sided platforms differ from regular firms in that they have to serve two or more distinct types of consumers to generate demand from any of them.

Economist David Evans, who has done as much as any to help us understand multi-sided platforms, has identified three different types:

  1. Market-Makers enable members of distinct groups to transact with each other. Each member of a group values the service more highly if there are more members of the other group, thereby increasing the likelihood of a match and reducing the time it takes to find an acceptable match. (Amazon and Apple’s App Store)
  2. Audience-Makers match advertisers to audiences. Advertisers value a service more if there are more members of an audience who will react positively to their messages; audiences value a service more if there is more useful “content” provided by audience-makers. (Google, especially through YouTube, and Facebook, especially through Instagram)
  3. Demand-Coordinators make goods and services that generate indirect network effects across two or more groups. These platforms do not strictly sell “transactions” like a market maker or “messages” like an audience-maker; they are a residual category much like irregular verbs – numerous, heterogeneous, and important. Software platforms such as Windows and the Palm OS, payment systems such as credit cards, and mobile telephones are demand coordinators. (Android, iOS)

In order to bring value, Big Tech has to consider consumers on all sides of the platforms it operates. Sometimes, this means consumers on one side of the platform subsidize those on the other.

For instance, Google doesn’t charge its users to use its search engine, YouTube, or Gmail. Instead, companies pay Google to advertise to their users. Similarly, Facebook doesn’t charge the users of its social network; advertisers on the other side of the platform subsidize them.
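
The logic of that cross-side subsidy can be shown with a toy sketch in Python. The demand curve and the per-user advertising revenue below are hypothetical assumptions, not estimates of any actual platform’s economics; they illustrate only why, when advertisers’ willingness to pay rises with the size of the audience, a platform can earn more by pricing the user side at zero.

```python
def users(user_price):
    """Toy user demand: 100 (million) users at a zero price, falling linearly."""
    return max(0.0, 100 - 50 * user_price)

def total_revenue(user_price, ad_revenue_per_user=2.00):
    """Revenue from both sides: user fees plus advertising that scales with audience size."""
    n = users(user_price)
    return user_price * n + ad_revenue_per_user * n

for price in (0.0, 0.50, 1.00, 1.50):
    print(f"user price ${price:.2f}: {users(price):.0f}M users, "
          f"total revenue ${total_revenue(price):.1f}M")
# user price $0.00: 100M users, total revenue $200.0M
# user price $0.50: 75M users, total revenue $187.5M
# user price $1.00: 50M users, total revenue $150.0M
# user price $1.50: 25M users, total revenue $87.5M
```

Under these assumptions, any positive user-side price shrinks the audience by more than the fees it collects, so the zero price is not charity; it is simply the revenue-maximizing way to balance the two sides.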

As their competitors and critics love to point out, there are some complications in that some platforms also compete in the markets they create. For instance, Apple does place its own apps in its App Store, and Amazon does engage in some first-party sales on its platform. But generally speaking, both Apple and Amazon act as matchmakers for exchanges between users and third parties.

The difficulty for multi-sided platforms is that they need to balance the interests of each part of the platform in a way that maximizes its value. 

Google and Facebook need to balance the interests of users and advertisers. In each case, this means a free service for users that is subsidized by the advertisers. The advertisers, in turn, gain a lot of value from tailoring ads based upon search history, browsing history, and likes and shares. Apple and Amazon need to create platforms that are valuable for both buyers and sellers, and must balance how much first-party competition they want to allow before they lose the benefits of third-party sales.

There are no easy answers to creating a search engine, a video service, a social network, an app store, or an online marketplace. Everything from moderation practices, to pricing on each side of the platform, to the degree of competition from the platform operators themselves must be balanced correctly, or these platforms will lose participants on one side of the platform or the other to competitors.

Conclusion

Representative Raskin’s “cyber barons” were dragged through the mud by Congress. But much like the falsely identified robber barons of the 19th century who were truly market entrepreneurs, the Big Tech companies of today are wrongfully maligned.

No one is forcing consumers to use these platforms. The incredible benefits they have brought to society through market processes show they are not robbing anyone. Instead, they are constantly innovating and attempting to strike a balance between consumers on each side of their platforms.

The myth of the cyber barons need not live on any longer than last week’s farcical antitrust hearing.

This guest post is by Corbin K. Barthold, Senior Litigation Counsel at Washington Legal Foundation.

A boy throws a brick through a bakeshop window. He flees and is never identified. The townspeople gather around the broken glass. “Well,” one of them says to the furious baker, “at least this will generate some business for the windowmaker!”

A reasonable statement? Not really. Although it is indeed a good day for the windowmaker, the money for the new window comes from the baker. Perhaps the baker was planning to use that money to buy a new suit. Now, instead of owning a window and a suit, he owns only a window. The windowmaker’s gain, meanwhile, is simply the tailor’s loss.

This parable of the broken window was conceived by Frédéric Bastiat, a nineteenth-century French economist. He wanted to alert the reader to the importance of opportunity costs—in his words, “that which is not seen.” Time and money spent on one activity cannot be spent on another.

Today Bastiat might tell the parable of the harassed technology company. A tech firm creates a revolutionary new product or service and grows very large. Rivals, lawyers, activists, and politicians call for an antitrust probe. Eventually they get their way. Millions of documents are produced, dozens of depositions are taken, and several hearings are held. In the end no concrete action is taken. “Well,” the critics say, “at least other companies could grow while the firm was sidetracked by the investigation!”

Consider the antitrust case against Microsoft twenty years ago. The case ultimately settled, and Microsoft agreed merely to modify minor aspects of how it sold its products. “It’s worth wondering,” writes Brian McCullough, a generally astute historian of the internet, “how much the flowering of the dot-com era was enabled by the fact that the most dominant, rapacious player in the industry was distracted while the new era was taking shape.” “It’s easy to see,” McCullough says, “that the antitrust trial hobbled Microsoft strategically, and maybe even creatively.”

Should we really be glad that an antitrust dispute “distracted” and “hobbled” Microsoft? What would a focused and unfettered Microsoft have achieved? Maybe nothing; incumbents often grow complacent. Then again, Microsoft might have developed a great search engine or social-media platform. Or it might have invented something that, thanks to the lawsuit, remains absent to this day. What Microsoft would have created in the early 2000s, had it not had to fight the government, is that which is not seen.

But doesn’t obstructing the most successful companies create “room” for new competitors? David Cicilline, the chairman of the House’s antitrust subcommittee, argues that “just pursuing the [Microsoft] enforcement action itself” made “space for an enormous amount of additional innovation and competition.” He contends that the large tech firms seek to buy promising startups before they become full-grown threats, and that such purchases must be blocked.

It’s easy stuff to say. It’s not at all clear that it’s true or that it makes sense. Hindsight bias is rampant. In 2012, for example, Facebook bought Instagram for $1 billion, a purchase that is now cited as a quintessential “killer acquisition.” At the time of the sale, however, Instagram had 27 million users and $0 in revenue. Today it has around a billion users, it is estimated to generate $7 billion in revenue each quarter, and it is worth perhaps $100 billion. It is presumptuous to declare that Instagram, which had only 13 employees in 2012, could have achieved this success on its own.

If distraction is an end in itself, last week’s Big Tech hearing before Cicilline and his subcommittee was a smashing success. Presumably Jeff Bezos, Tim Cook, Sundar Pichai, and Mark Zuckerberg would like to spend the balance of their time developing the next big innovations and staying ahead of smart, capable, ruthless competitors, starting with each other and including foreign firms such as ByteDance and Huawei. Last week they had to put their aspirations aside to prepare for and attend five hours of political theater.

The most common form of exchange at the hearing ran as follows. A representative asks a slanted question. The witness begins to articulate a response. The representative cuts the witness off. The representative gives a prepared speech about how the witness’s answer proved her point.

Lucy Kay McBath, a first-term congresswoman from Georgia, began one such drill with the claim that Facebook’s privacy policy from 2004, when Zuckerberg was 20 and Facebook had under a million users, applies in perpetuity. “We do not and will not use cookies to collect private information from any users,” it said. Has Facebook broken its “promise,” McBath asked, not to use cookies to collect private information? No, Zuckerberg explained (letting the question’s shaky premise slide), Facebook uses only standard log-in cookies.

“So once again, you do not use cookies? Yes or no?” McBath interjected. Having now asked a completely different question, and gotten a response resembling what she wanted—“Yes, we use cookies [on log-in features]”—McBath could launch into her canned condemnation. “The bottom line here,” she said, reading from her page, “is that you broke a commitment to your users. And who can say whether you may or may not do that again in the future?” The representative pressed on with her performance, not noticing or not caring that the person she was pretending to engage with had upset her script.

Many of the antitrust subcommittee’s queries had nothing to do with antitrust. One representative fixated on Amazon’s ties with the Southern Poverty Law Center. Another seemed to want Facebook to interrogate job applicants about their political beliefs. A third asked Zuckerberg to answer for the conduct of Twitter. One representative demanded that social-media posts about unproven Covid-19 treatments be left up, another that they be taken down. Most of the questions that were at least vaguely on topic, meanwhile, were exceedingly weak. The representatives often mistook emails showing that tech CEOs play to win, that they seek to outcompete challengers and rivals, for evidence of anticompetitive harm to consumers. And the panel was often treated like a customer-service hotline. This app developer ran into a difficulty; what say you, Mr. Cook? That third-party seller has a gripe; why won’t you listen to her, Mr. Bezos?

In his opening remarks, Bezos cited a survey that ranked Amazon one of the country’s most trusted institutions. No surprise there. In many places one could have ordered a grocery delivery from Amazon as the hearing started and had the goods put away before it ended. Was Bezos taking a muted dig at Congress? He had every right to—it is one of America’s least trusted institutions. Pichai, for his part, noted that many users would be willing to pay thousands of dollars a year for Google’s free products. Is Congress providing people that kind of value?

The advance of technology will never be an unalloyed blessing. There are legitimate concerns, for instance, about how social-media platforms affect public discourse. “Human beings evolved to gossip, preen, manipulate, and ostracize,” psychologist Jonathan Haidt and technologist Tobias Rose-Stockwell observe. Social media exploits these tendencies, they contend, by rewarding those who trade in the glib put-down, the smug pronouncement, the theatrical smear. Speakers become “cruel and shallow”; “nuance and truth” become “casualties in [a] competition to gain the approval of [an] audience.”

Three things are true at once. First, Haidt and Rose-Stockwell have a point. Second, their point goes only so far. Social media does not force people to behave badly. Assuming otherwise lets individual humans off too easy. Indeed, it deprives them of agency. If you think it is within your power to display grace, love, and transcendence, you owe it to others to think it is within their power as well.

Third, if you really want to see adults act like children, watch a high-profile congressional hearing. A hearing for Attorney General William Barr, held the day before the Big Tech hearing and attended by many of the same representatives, was a classic of the format.

The tech hearing was not as shambolic as the Barr hearing. And the representatives act like sanctimonious halfwits in part to concoct the sick burns that attract clicks on the very platforms built, facilitated, and delivered by the tech companies. For these and other obvious reasons, no one should feel sorry for the four men who spent a Wednesday afternoon serving as props for demagogues. But that doesn’t mean the charade was a productive use of time. There is always that which is not seen.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Oscar Súmar, Dean of the Law School of the Scientific University of the South (Peru).]

Peru’s response to the pandemic has been one of the most radical in Latin America: Restrictions were imposed sooner, lasted longer, and were among the strictest in the region. Peru went into lockdown on March 15, after only 71 cases had been reported. Along with the usual restrictions (temporary restaurant and school closures), the Peruvian government took other measures, such as bans on the use of private vehicles and mandatory nightly curfews. For a time, there even were gender-based movement restrictions: men and women were allowed out on different days.

A few weeks into the lockdown, it became obvious that these measures were not flattening the curve of infections. But instead of reconsidering its strategy, the government insisted on the same path, with depressing results. Peru is among the countries hit hardest by Covid-19, with 300,000 total cases as of July 4, 2020, and one of the highest rates of “excess deaths,” at 140%. Peru’s government has tried a rich country’s response, despite the fact that Peru lacks the institutions and wealth to make such a response possible.

The Peruvian response to coronavirus can be attributed to three factors. One, paternalism is popular in Peru and arguments for liberty are ignored. This is confirmed by the fact that President Vizcarra enjoys to this day a great deal of popularity thanks to this draconian lockdown, even as the government has repeatedly blamed people’s negligence as the main cause of contagion. Two, government officials have socialistic tendencies. For instance, the Prime Minister – Mr. Zeballos – used to speak freely about price regulations and nationalization, even before the pandemic. And three, Peru’s health system is one of the worst in the region. It was foreseeable that our health system would be overwhelmed within the first few weeks, so our government decided to go into early lockdown.

Peru has also launched one of the most aggressive economic relief programs in the world, equivalent to 12% of its GDP. This program included a “universal bond” for poor families, as well as a loan program for small, medium and large businesses. The program was praised by media around the world. Despite this program, Peru has been one of the worst-hit countries in the world in economic terms. The World Bank predicts that Peru will suffer the biggest GDP contraction in the region.

If anything, Peru played the crisis by the book. But Peru’s lack of strong, legitimate, and honest institutions has made its policies ineffectual. Just a few months prior to the beginning of the pandemic, President Vizcarra dissolved the Congress. And Peru has been engulfed in a far-reaching corruption scandal for years. Only two years ago, former president Pedro Pablo Kuczynski resigned the presidency after being directly implicated in the scandal, and his vice president at the time, Martin Vizcarra, took over. Much of Peru’s political and business elite has also been implicated in this scandal, with members of the elite summoned daily to the criminal prosecutor’s office for questioning.

However, if we want to understand the lack of strong institutions in Peru – and how this affected our response to the pandemic – we need to go back even further. In the 1980s, after having lived through a socialist military dictatorship, a young candidate named Alan Garcia was democratically elected president. But during Garcia’s presidency, Peru ran up a trillion-dollar foreign debt and record levels of inflation, and imposed price controls and nationalizations. Peru also fought a losing war against an armed Marxist terrorist group. By 1990, Peru was on the edge of the abyss. In the 1990 presidential campaign, Peruvians had to decide between a celebrated libertarian intellectual with little political experience, the novelist Mario Vargas Llosa, and Alberto Fujimori, a political “outsider” with largely unknown ideas but an aura of pragmatism about him. We chose the latter.

Fujimori’s two main goals were to end domestic terrorism and to stabilize Peru’s ruined economy. This second task was achieved by following the Washington Consensus recipe: changing the Constitution after a self-coup. The Consensus has been deemed a “neoliberal” group of policies, but it was really the product of a decades-long consensus among World Bank experts about policies that almost all mainstream economists favor. The policies included privatization, deregulation, free trade, monetary stability, control over borrowing, and a focusing of public spending on health, education and infrastructure. A secondary part of the recommendations was aimed at institutional reform, poverty alleviation, and the reform of tax and labor laws.

The implementation of the Consensus by Fujimori and subsequent governments was a mix of the actual “structural adjustments” recommended by the Bank and systemic over-regulation, mercantilism, and corruption. Every Peruvian president since 1990 is either currently being investigated or has been charged with corruption.

Although Peru’s GDP increased by more than 5% per year for many years after 1990, and poverty has shrunk by more than 50% in the last decade, other problems have remained. People have no access to decent healthcare; basic education in Peru is among the worst in the world; and more than half of the population does not have access to clean drinking water. Informality also remains one of our biggest problems, since the tax and labor reforms never took place. Our tax base is very small, and our labor legislation is among the costliest in Latin America.

In Peruvian eyes, this is what “neoliberalism” looks like. Peru was good at implementing many of the high-level reforms, but not the detailed and complex institutional ones. The Consensus assumed the coexistence of free market institutions and measures of social assistance. Peru had some of these, but not enough. Even the reforms that did take place weren’t legitimate or part of our actual social consensus.

Taking advantage of people’s discontent, some leftist politicians, journalists, academics, and activists now want nothing more than to return to our previous interventionist Constitution and to socialism. Peruvian people are crying out for change. If the current situation is partially explained by our implementation of the Washington Consensus, and that Consensus is deemed “neoliberal”, it’s no surprise that “change” is understood as going back to a more interventionist regime. Our current situation could be seen as the result of people demanding more government intervention, with the government and Congress simply meeting that demand, and no institutional framework to resist it.

The health crisis we are currently experiencing highlights the cost of Peru’s lack of strong institutions. Peru had one of the most ill-prepared public healthcare systems in the world at the beginning of the pandemic, with just 100 intensive care units. But there is virtually no private alternative, because the private sector is so heavily regulated, and what exists is mostly the preserve of the elite. So, instead of working to improve the public system or promote more competition in the private sector, the government threatened clinics with a takeover.

The Peruvian government was unable to deliver policies that matched the real conditions of its population. We have, in effect, the lockdown of a rich country with few of the conditions that have allowed such lockdowns to work elsewhere. Inner-city poverty and a large informal economy (estimated at 70% of Peru’s economy) made the lockdown a health and economic trap for the majority of the population (this study by Norma Loayza is very illustrative).

Incapable of facing the truth about Peru’s ability to withstand a lockdown, government officials relied on regulation to try to reshape reality to their wishes. The result is 20-40 pages of “protocols” to be fulfilled by small companies, completely ignored by the informal 70% of the economy. In some cases, these regulations were obvious examples of rent-seeking as well. For example, only firms with 1 million soles (approximately 300,000 USD) in sales in the past year and with at least three physical branches were allowed to do business online during the lockdown.

Even though the lockdown officially ended on July 1, the government must approve each industry before it can operate again. At the same time, our Congress has passed legislation prohibiting toll collection (even where it is a contractual agreement); it has criminalized “hoarding” and reinstated “speculation” as a felony; and there is a proposal to freeze all financial debts. Some economic commentators argue that in Peru the “populist virus” is even worse than Covid-19. Peru’s failure in dealing with the virus must be understood in light of its long history of interventionist governments that have let economic sclerosis set in through overregulation and done little to build the kinds of institutions that would allow a pandemic response suited to Peru to work. Our lack of strong institutions, of confidence in the market economy, and of human capital in the public sector has put us in an extremely fragile position to fight the virus.

Earlier this year the UK government announced it was adopting the main recommendations of the Furman Report into competition in digital markets and setting up a “Digital Markets Taskforce” to oversee those recommendations being put into practice. The Competition and Markets Authority’s digital advertising market study largely came to similar conclusions (indeed, in places it reads as if the CMA worked backwards from those conclusions).

The Furman Report recommended that the UK should overhaul its competition regime with some quite significant changes to regulate the conduct of large digital platforms and make it harder for them to acquire other companies. But, while the Report’s panel is accomplished and its tone is sober and even-handed, the evidence on which it is based does not justify the recommendations it makes.

Most of the citations in the Report are of news reports or simple reporting of data with no analysis, and there is very little discussion of the relevant academic literature in each area, even to give a summary of it. In some cases, evidence and logic are misused to justify intuitions that are just not supported by the facts.

Killer acquisitions

One particularly bad example is the report’s discussion of mergers in digital markets. The Report provides a single citation to support its proposals on the question of so-called “killer acquisitions” — acquisitions where incumbent firms acquire innovative startups to kill their rival product and avoid competing on the merits. The concern is that these mergers slip under the radar of current merger control either because the transaction is too small, or because the purchased firm is not yet in competition with the incumbent. But the paper the Report cites, by Colleen Cunningham, Florian Ederer and Song Ma, looks only at the pharmaceutical industry. 

The Furman Report says that “in the absence of any detailed analysis of the digital sector, these results can be roughly informative”. But there are several important differences between the drug markets the paper considers and the digital markets the Furman Report is focused on. 

The scenario described in the Cunningham, et al. paper is of a patent holder buying a direct competitor that has come up with a drug that emulates the patent holder’s drug without infringing on the patent. As the Cunningham, et al. paper demonstrates, decreases in development rates are a feature of acquisitions where the acquiring company holds a patent for a similar product that is far from expiry. The closer a patent is to expiry, the less likely an associated “killer” acquisition is. 

But tech typically doesn’t have the clear and predictable IP protections that would make such strategies reliable. The long and uncertain development and approval process involved in bringing a drug to market may also be a factor.

There are many more differences between tech acquisitions and the “killer acquisitions” in pharma that the Cunningham, et al. paper describes. So-called “acqui-hires,” where a company is acquired in order to hire its workforce en masse, are common in tech and are explicitly ruled out as “killers” by the paper, for example: it is not harmful to overall innovation or output if a team is moved to a more productive project after an acquisition. And network effects, although sometimes troubling from a competition perspective, can also make mergers of platforms beneficial for users by growing the size of the platform (because, of course, one of the points of a network is its size).

The Cunningham, et al. paper estimates that 5.3% of pharma acquisitions are “killers”. While that may seem low, some might say it’s still 5.3% too much. However, it’s not obvious that a merger review authority could bring that number closer to zero without also rejecting more mergers that are good for consumers, making people worse off overall. Given the number of factors that are specific to pharma and that do not apply to tech, it is dubious whether the findings of this paper are useful to the Furman Report’s subject at all. Given how few acquisitions are found to be “killers” in pharma with all of these conditions present, it seems reasonable to assume that, even if this phenomenon does apply in some tech mergers, it is significantly rarer than the ~5.3% of mergers Cunningham, et al. find in pharma. As a result, the likelihood of erroneous condemnation of procompetitive mergers is significantly higher. 
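
To see why, a simple base-rate calculation helps. The sketch below, in Python, uses the Cunningham, et al. 5.3% figure for pharma as the share of acquisitions that are genuinely “killers”; the sensitivity and false-positive rates of the hypothetical merger screen are purely illustrative assumptions.

```python
def share_of_blocked_that_are_benign(base_rate, sensitivity, false_positive_rate):
    """Among the deals a hypothetical merger screen blocks, the share that
    were actually procompetitive ('benign')."""
    blocked_killers = base_rate * sensitivity
    blocked_benign = (1 - base_rate) * false_positive_rate
    return blocked_benign / (blocked_killers + blocked_benign)

# Pharma-like base rate (5.3% "killers"), with a screen that catches 80% of
# killers but wrongly flags 10% of benign deals.
print(share_of_blocked_that_are_benign(0.053, 0.80, 0.10))  # ~0.69

# If the true rate in tech is lower (say, 1%), the same screen mostly blocks
# procompetitive deals.
print(share_of_blocked_that_are_benign(0.01, 0.80, 0.10))   # ~0.93
```

Under these illustrative assumptions, most of the deals such a screen would block are procompetitive ones caught in error, and the lower the true rate of killer acquisitions, the worse that ratio gets.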

In any case, there’s a fundamental disconnect between the “killer acquisitions” in the Cunningham, et al. paper and the tech acquisitions described as “killers” in the popular media. Neither Facebook’s acquisition of Instagram nor Google’s acquisition of YouTube, which FTC Commissioner Rohit Chopra recently highlighted, would count, because in neither case was the acquired company “killed.” Nor were any of the other commonly derided tech acquisitions — e.g., Facebook/WhatsApp, Google/Waze, Microsoft/LinkedIn, or Amazon/Whole Foods — “killers,” either.

In all these high-profile cases the acquiring companies expanded the service and invested more in them. One may object that these services would have competed with their acquirers had they remained independent, but this is a totally different argument to the scenarios described in the Cunningham, et al. paper, where development of a new drug is shut down by the acquirer ostensibly to protect their existing product. It is thus extremely difficult to see how the Cunningham, et al. paper is even relevant to the digital platform context, let alone how it could justify a wholesale revision of the merger regime as applied to digital platforms.

A recent paper (published after the Furman Report) does attempt to survey acquisitions by Google, Amazon, Facebook, Microsoft, and Apple. Out of 175 acquisitions in the 2015-17 period the paper surveys, only one satisfies the Cunningham, et al. paper’s criteria for being a potentially “killer” acquisition — Facebook’s acquisition of a photo sharing app called Masquerade, which had raised just $1 million in funding before being acquired.

In lieu of any actual analysis of mergers in digital markets, the Report falls back on a puzzling logic:

To date, there have been no false positives in mergers involving the major digital platforms, for the simple reason that all of them have been permitted. Meanwhile, it is likely that some false negatives will have occurred during this time. This suggests that there has been underenforcement of digital mergers, both in the UK and globally. Remedying this underenforcement is not just a matter of greater focus by the enforcer, as it will also need to be assisted by legislative change.

This is very poor reasoning. It does not logically follow that the (presumed) existence of false negatives implies that there has been underenforcement, because overenforcement carries costs as well. Moreover, there are strong reasons to think that false positives in these markets are more costly than false negatives. A well-run court system might still fail to convict a few criminals because the cost of accidentally convicting an innocent person is so high.

The UK’s competition authority did commission an ex post review of six historical mergers in digital markets, including Facebook/Instagram and Google/Waze, two of the most controversial in the UK. Although it did suggest that the review process could have been done differently, it also highlighted efficiencies that arose from each merger, and it did not conclude that any had led to consumer detriment.

Recommendations

The Report is vague about which mergers it considers to have been uncompetitive, and apart from the aforementioned text it does not really attempt to justify its recommendations around merger control. 

Despite this, the Report recommends a shift to a ‘balance of harms’ approach. Under the current regime, merger review focuses on the likelihood that a merger would reduce competition, which at least gives clarity about the factors to be considered. A ‘balance of harms’ approach would require the potential scale (size) of the merged company to be considered as well. 

This could provide a basis for blocking almost any merger on ‘scale’ grounds. After all, if a photo-editing app with a sharing timeline can grow into the world’s second-largest social network, how could a competition authority say with any confidence that some other acquisition might not prevent the emergence of a new platform on a similar scale, however unlikely? Such a standard would also make merger review an even more opaque and uncertain process than it currently is, potentially deterring efficiency-enhancing mergers or leading startups that would like to be acquired to set up and operate overseas instead (or not to be started at all).

The treatment of mergers is just one example of the shallowness of the Report. In many other cases — the discussions of concentration and barriers to entry in digital markets, for example — big changes are recommended on the basis of a handful of papers or fewer. Intuition repeatedly trumps evidence and academic research.

The Report’s subject is incredibly broad, of course, and one might argue that such a limited, casual approach is inevitable. In this sense the Report may function perfectly well as an opening brief, introducing the potential range of problems in the digital economy that a rational competition authority might consider addressing. But the complexity and uncertainty of the issues is no reason to eschew rigorous, detailed analysis before determining that a compelling case has been made. Adopting the Report’s assumptions of harm (and in many cases assumption is the most one can say of them) and its remedial recommendations on the limited bases it offers is sure to lead to erroneous enforcement of competition law in a way that would reduce, rather than enhance, consumer welfare.

In the face of an unprecedented surge of demand for bandwidth as Americans responded to COVID-19, the nation’s Internet infrastructure delivered for urban and rural users alike. In fact, since the crisis began in March, there has been no appreciable degradation in either the quality or availability of service. That success story is as much about the network’s robust technical capabilities as it is about the competitive environment that made the enormous private infrastructure investments to build the network possible.

Yet, in spite of that success, calls to blind ISP pricing models to the bandwidth demands of users by preventing firms from employing “usage-based billing” (UBB) have again resurfaced. Today those demands are arriving in two waves: first, in the context of a petition by Charter Communications to employ the practice as the conditions of its merger with Time Warner Cable become ripe for review; and second, in the form of complaints about ISPs re-imposing UBB after voluntarily suspending the practice during the first months of the COVID-19 pandemic — a suspension that itself went beyond the Keep Americans Connected Pledge championed by FCC Chairman Ajit Pai.

In particular, critics believe they have found clear evidence that UBB isn’t necessary for network management purposes, as (they assert) ISPs have long claimed. Devin Coldewey of TechCrunch, for example, recently asserted that:

caps are completely unnecessary, existing only as a way to squeeze more money from subscribers. Data caps just don’t matter any more…. Think about it: If the internet provider can even temporarily lift the data caps, then there is definitively enough capacity for the network to be used without those caps. If there’s enough capacity, then why did the caps exist in the first place? Answer: Because they make money.

The thing is, though, ISPs did not claim that UBB was about the day-to-day “manage[ment of] network loads.” Indeed, the network-management strawman has taken on a life of its own. If you follow the thread of articles in an attempt to substantiate the claim (for instance: from here, to here, to here, to here), it turns out to be a long line of critics citing each other’s criticisms of this purported claim by ISPs. But they never cite the ISPs themselves making this assertion — only instances where ISPs offer completely different explanations, coupled with the critics’ claims that such examples show only that ISPs are now changing their tune. In reality, the imposition of usage-based billing is, and has always been, a basic business decision — as it is for every other company that uses it (which is to say: virtually all companies).

What’s UBB really about?

For critics, however, UBB is never just a “basic business decision.” Rather, the only conceivable explanations for UBB are network management and the extraction of money. There is no room in this conception of the practice for perfectly straightforward pricing decisions that simply vary charges according to customers’ usage of the service. Nor does this viewpoint recognize the importance of these pricing practices for long-term network cultivation, in the form of investment in the additional capacity needed to meet growing user demand.

But to disregard these actual reasons for the use of UBB is to ignore what is economically self-evident.

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

A flat-rate pricing mandate wouldn’t allow pricing structures that tie cost recovery to usage. In such a world an ISP couldn’t offer a lower price to lighter users for a basic tier and rely on higher revenues from the heaviest users to cover the costs of network investment. Instead, it would have to finance network improvements for the most demanding users out of higher prices charged to all users, including the least demanding users who make up the vast majority of subscribers today (according to Comcast, for example, 95 percent of its subscribers use less than 1.2 TB of data monthly).
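To make the arithmetic concrete, here is a minimal sketch with purely hypothetical numbers; the subscriber counts, usage levels, and network cost below are illustrative assumptions, not figures from any ISP:

```python
# Hypothetical comparison of flat-rate vs. usage-based cost recovery.
# All figures are illustrative assumptions, not actual ISP data.

light_users, heavy_users = 95, 5        # subscribers, out of 100
light_gb, heavy_gb = 200, 2_000         # monthly data use per subscriber (GB)
network_cost = 6_000                    # monthly network cost to recover ($)

subscribers = light_users + heavy_users
total_gb = light_users * light_gb + heavy_users * heavy_gb

# Mandated flat rate: every subscriber pays an equal share of the cost.
flat_bill = network_cost / subscribers

# Usage-based billing: each subscriber's bill scales with the data they use.
per_gb = network_cost / total_gb
ubb_light_bill = per_gb * light_gb
ubb_heavy_bill = per_gb * heavy_gb

print(f"Flat rate, every subscriber: ${flat_bill:.2f}")
print(f"UBB, light user:             ${ubb_light_bill:.2f}")
print(f"UBB, heavy user:             ${ubb_heavy_bill:.2f}")
# With these assumptions, the 95 light users pay roughly $41 under UBB but $60
# under the flat-rate mandate, while the 5 heaviest users bear the cost they generate.
```

In this toy example, the gap between the light users’ flat-rate bill and their usage-based bill is exactly the cross-subsidy described above.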

On this basis, UBB is a sensible (and equitable, as some ISPs note) way to share the cost of building, maintaining, and upgrading the nation’s networks that simultaneously allows ISPs to react to demand changes in the market while enabling consumers to purchase a tier of service commensurate with their level of use. Indeed, charging customers based on the quality and/or amount of a product they use is a benign, even progressive, practice that insulates the majority of consumers from the obligation to cross-subsidize the most demanding customers.

Objections to the use of UBB fall generally into two categories. One stems from the baseline misapprehension that UBB is needed to manage the network; that fallacy is dispelled above. The other is borne of a simple lack of familiarity with the practice.

Consider that, in the context of Internet services, broadband customers are accustomed to the notion that access to greater speeds costs more, but are less familiar with the related notion of charging based on data consumption. Below, we’ll discuss the prevalence of UBB across sectors, how it works in the context of broadband Internet service, and the ultimate benefits of allowing a diversity of pricing models among ISPs.

Usage-based pricing in other sectors

To nobody’s surprise, usage-based pricing is common across all sectors of the economy. Anything you buy by the unit, or by weight, is subject to “usage-based pricing.” It is how we buy apples at the grocery store and gasoline for our cars.

Usage-based pricing need not always be so linear, either. In the tech sector, for instance, when you hop in a ride-sharing service like Uber or Lyft, you’re charged a base fare, plus a rate that varies according to the distance of your trip. By the same token, cloud storage services like Dropbox and Box operate under a “freemium” model in which a basic amount of storage and services is offered for free, while access to higher storage tiers and enhanced services costs increasingly more. In each case the customer is effectively responsible (at least in part) for supporting the service to the extent of her use of its infrastructure.

Even in sectors in which virtually all consumers are obligated to purchase products and where regulatory scrutiny is profound — as is the case with utilities and insurance — non-linear and usage-based pricing are still common. That’s because customers who use more electricity or who drive their vehicles more use a larger fraction of shared infrastructure, whether physical conduits or a risk-sharing platform. The regulators of these sectors recognize that tremendous public good is associated with the persistence of utility and insurance products, and that fairly apportioning the costs of their operations requires differentiating between customers on the basis of their use. In point of fact (as we’ve known at least since Ronald Coase pointed it out in 1946), the most efficient and most equitable pricing structure for such products is a two-part tariff incorporating a fixed base rate as well as a variable charge based on usage.
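As a quick illustration of the two-part tariff Coase described, the general form is a fixed base charge plus a variable charge that scales with usage; the sketch below uses hypothetical rates chosen only for illustration:

```python
def two_part_tariff(usage: float, fixed_fee: float = 30.0, rate: float = 0.10) -> float:
    """Bill under a two-part tariff: fixed_fee + rate * usage.

    The fixed fee contributes to shared infrastructure costs that do not vary
    with use; the variable part aligns each customer's bill with their
    incremental use of that infrastructure. Rates here are hypothetical.
    """
    return fixed_fee + rate * usage

print(two_part_tariff(100))    # light user:  30 + 0.10 * 100  = $40.00
print(two_part_tariff(1_500))  # heavy user:  30 + 0.10 * 1500 = $180.00
```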

Pricing models that don’t account for the extent of customer use are vanishingly rare. “All-inclusive” experiences like Club Med or the Golden Corral all-you-can-eat buffet are the exception and not the rule when it comes to consumer goods. And it is well-understood that such examples adopt effectively regressive pricing — charging everyone a high enough price to ensure that they earn sufficient return from the vast majority of light eaters to offset the occasional losses from the gorgers. For most eaters, in other words, a buffet lunch tends to cost more and deliver less than a menu-based lunch. 

All of which is to say that the typical ISP pricing model — in which charges are based on a generous, and historically growing, basic tier coupled with an additional charge that increases with data use that exceeds the basic allotment — is utterly unremarkable. Rather, the mandatory imposition of uniform or flat-fee pricing would be an aberration.

Aligning network costs with usage

Throughout its history, Internet usage has increased constantly and often dramatically. This ever-growing demand has necessitated investment in US broadband infrastructure running into the tens of billions of dollars annually. Faced with the need for this investment, UBB is a tool that helps to align network costs equitably with different customers’ usage levels, in a way that promotes both access and resilience.

As President Obama’s first FCC Chairman, Julius Genachowski, put it:

Our work has also demonstrated the importance of business innovation to promote network investment and efficient use of networks, including measures to match price to cost such as usage-based pricing.

Importantly, it is the marginal impact of the highest-usage customers that drives a great deal of those network investment costs. In the case of one ISP, a mere 5 percent of residential users make up over 20 percent of its network usage. Necessarily then, in the absence of UBB and given the constant need for capacity expansion, uniform pricing would typically act to disadvantage low-volume customers and benefit high-volume customers.

Even Tom Wheeler — President Obama’s second FCC Chairman and the architect of utility-style regulation of ISPs — recognized this fact and chose to reject proposals to ban UBB in the 2015 Open Internet Order, explaining that:

[P]rohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks. (emphasis added)

When it comes to expanding Internet connectivity, the policy ramifications of uniform pricing are regressive. As such, they run counter to the stated goals of policymakers across the political spectrum insofar as they saddle low-volume users — presumably, precisely the marginal users who may be disinclined to subscribe in the first place — with higher prices than they would face under usage-based pricing, deterring them from subscribing at all. Closing the digital divide means supporting the development of a network that is at once sustainable and equitable on the basis of its scope and use. Mandated uniform pricing accomplishes neither.

Of similarly profound importance is the need to ensure that Internet infrastructure is ready for demand shocks, as we saw with the COVID-19 crisis. Linking pricing to usage gives ISPs the incentive and wherewithal to build and maintain high-capacity networks to cater to the ever-growing expectations of high-volume users, while also encouraging the adoption of network efficiencies geared towards conserving capacity (e.g., caching, downloading at off-peak hours rather than streaming during peak periods).

Contrary to the claims of some that the success of ISPs’ networks during the COVID-19 crisis shows that UBB is unnecessary and extractive, the recent increases in network usage (which may well persist beyond the eventual end of the crisis) demonstrate the benefits of nonlinear pricing models like UBB. Indeed, the consistent efforts to build out the network to serve high-usage customers, funded in part by UBB, redound not only to the advantage of abnormal users in regular times, but also to the advantage of regular users in abnormal times.

The need for greater capacity, along with capacity-conserving efficiencies, has been underscored by the scale of the demand shock among high-load users resulting from COVID-19. According to OpenVault, a data-use tracking service, the number of “power users” (those using 1TB/month or more) and “extreme power users” (2TB/month or more) jumped 138 percent and 215 percent, respectively. As a result, power users now represent 10 percent of subscribers across the network, while extreme power users comprise 1.2 percent of subscribers.

Pricing plans predicated on load volume necessarily evolve along with network capacity, but at this moment the application of UBB for monthly loads above 1TB ensures that ISPs maintain an incentive to cater to power users and extreme power users alike. In doing so, ISPs are also ensuring that all users are protected when the Internet’s next abnormal — but, sadly, predictable — event arrives.

At the same time, UBB also helps to facilitate the sort of customer-side network efficiencies that may prove especially important during times of abnormally elevated demand. Customers’ usage need not be indifferent to the value of the data they use, and usage-based pricing helps to ensure that data usage aligns not only with costs but also with the data’s value to consumers. In this way the behavior of both ISPs and customers will better reflect the objective realities of the nation’s networks and their limits.

The case for pricing freedom

Finally, it must be noted that ISPs are not all alike, and that the market sustains a range of pricing models across ISPs according to what suits their particular business models, network characteristics, load capacity, and user types (among other things). Consider that even ISPs that utilize UBB almost always offer unlimited data products, while some ISPs choose to adopt uniform pricing to differentiate their offerings. In fact, at least one ISP has moved to uniform billing in light of COVID-19 to provide its customers with “certainty” about their bills.

The mistake isn’t in any given ISP electing a uniform billing structure or a usage-based billing structure; rather, it is in prescribing a single pricing structure for all ISPs. Claims that such price controls are necessary because consumers are harmed by UBB ignore its prevalence across the economy, its salutary effect on network access and resilience, and the manner in which it promotes affordability and a sensible allocation of cost recovery across consumers.

Moreover, network costs and traffic demand patterns are dynamic, and the availability of UBB — among other pricing schemes — also allows ISPs to tailor their offerings to those changing conditions in a manner that differentiates them from their competitors. In doing so, those offerings are optimized to be attractive in the moment, while still facilitating network maintenance and expansion in the future.

Where economically viable, more choice is always preferable. The notion that consumers will somehow be harmed if they get to choose Internet services based not only on speed, but also on data load, is a specious product of the confused and the unfamiliar. The sooner the stigma around UBB is overcome, the better off the majority of US broadband customers will be.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc.). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards the very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed, because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach (i.e., arbitrage). This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution: in the late 18th century, botanists discovered an orchid with an unusually long spur, which made its nectar incredibly hard for insects to reach. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis long enough to reach the orchid’s nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple’s platform is likely even more closed than Microsoft’s Windows ever was). Both firms control who is allowed on their platform and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs’ freedom to distribute Windows PCs as they saw fit (notably by “imposing” certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top-right quadrant, the business models of Amazon and Qualcomm are much more “open,” yet they remain highly propertized. Almost anyone is free to implement Qualcomm’s IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon’s platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom-left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open-source license, and Google’s apps can be preloaded by OEMs free of charge. The main limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open.” While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc.). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement.
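Since the chart itself is not reproduced here, the following is a minimal sketch of how the described classification might be plotted; the coordinates are my own rough assumptions, chosen only to match the quadrant placements described above, not measurements of any kind:

```python
# Illustrative sketch of the two-dimensional classification described above.
# Coordinates are rough assumptions placing each platform in the quadrant
# the text assigns to it.
import matplotlib.pyplot as plt

platforms = {
    "Apple (iOS / App Store)": (1.5, 9.0),   # closed, highly propertized
    "Microsoft (Windows)":     (3.0, 8.0),
    "Qualcomm":                (7.0, 9.0),   # open, highly propertized
    "Amazon (marketplace)":    (8.0, 8.0),
    "Google Search":           (3.5, 3.0),   # partially open, weakly propertized
    "Android":                 (4.0, 2.0),
}

fig, ax = plt.subplots(figsize=(6, 5))
for name, (openness, propertization) in platforms.items():
    ax.scatter(openness, propertization)
    ax.annotate(name, (openness, propertization),
                textcoords="offset points", xytext=(5, 5), fontsize=8)

ax.axvline(5, linestyle="--", linewidth=0.8)  # closed vs. open
ax.axhline(5, linestyle="--", linewidth=0.8)  # shared vs. propertized
ax.text(7.5, 2.0, "(empty quadrant:\nopen & shared)", ha="center", fontsize=8)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Openness to rivals (closed to open)")
ax.set_ylabel("Propertization (shared to propertized)")
ax.set_title("Illustrative platform classification (assumed coordinates)")
plt.tight_layout()
plt.show()
```

The bottom-right quadrant is deliberately left empty in this sketch; as discussed below, that is where the open and shared platforms favored by authorities would sit.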

Enforcement

Readers might ask: what is the point of this classification? The answer is that, in each of the above cases, competition intervention attempted (or is attempting) to move firms and platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the Google cases in the EU sought to bring more openness to the company’s main platform. The Google Shopping decision sanctioned Google for purportedly placing its own services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines’ and browsers’ access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions and investigations is that authorities are pushing back against the very features that distinguish the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are being made to share them (or, at the very least, to monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems, both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized: Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. The theme is repeated in the standardization space. There have been innumerable attempts to impose open, royalty-free standards, but, at least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). The pattern recurs in other highly standardized industries, such as digital video formats, where, most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have pointed to a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities’ ideal platforms so far failed to achieve truly meaningful success at the consumer end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically (and perhaps anticompetitively) thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into detail on the merits of the first conjecture; current antitrust debates have endlessly rehashed it. However, it is worth mentioning that many of today’s dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc.). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked, especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits a platform’s ability to propertize its assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more “competitive” ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept on using the very business model that the Commission reprimanded: Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them notably opted for Apple’s iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple’s macOS).

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple and an Android smartphone (or between a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

To conclude, consumers and firms appear to gravitate towards both closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still misunderstood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said this best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest in this unfortunate trend.

The great Dr. Thomas Sowell

One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.

Here, as a long-time reader of his works, I want to offer something of a blog-post festschrift: a brief look at how Sowell’s voluminous writings on visions, law, race, and economics could form the basis of a positive agenda to achieve a greater measure of racial justice in the United States.

The Importance of Visions

One of the most important aspects of Sowell’s work is his ability to distill wide-ranging issues into debates involving different mental models, or a “Conflict of Visions.” He calls one vision the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and holds that some people, by virtue of superior knowledge and morality, are capable of redesigning society to create a better world.

An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems. 

An important implication of the unconstrained vision, on the other hand, is that there are some who, because of superior enlightenment (what Sowell calls the “Vision of the Anointed”), can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the result of deliberate human design and choice, and failures to change them to be more just or equal are seen as the result of immorality or a lack of will.

The difference in visions makes all the difference in how we view things like justice and institutions. In the constrained view, institutions like language, culture, and even much of the law result from “spontaneous ordering”: the result of human action but not of human design. Limited government, markets, and tradition are all important in helping individuals coordinate action. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism; there are only trade-offs.

But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.

For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like all men, or blacks versus whites. On this view, differences in outcomes are neither just nor unjust; what matters is that the processes themselves are just. These processes should focus on the institutional incentives facing participants. Reforms should be careful not to upset important incentive structures that have evolved over time as the best way for limited human beings to coordinate behavior.

The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If impartial processes like markets or the law produce disparities, those with an unconstrained vision often see the processes themselves as racist. The conclusion is that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on the part of those in charge of the interventions.

For Sowell, a large part of his research project has been showing that those with the unconstrained vision often harm those they are proclaiming the intention to help in their quest for cosmic justice. 

A Constrained Vision of Racial Justice

Sowell has written a great deal on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living, flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of understanding the history of black Americans. But his constant challenge is that racism can’t be the only variable that explains disparities. Sowell points to the importance of culture and education in building the human capital needed to succeed in market economies. Without taking those other variables into account, there is no way to determine the extent to which racism is the cause of disparities.

This has important implications for achieving racial justice today. When it comes to policies pursued in the name of racial justice, Sowell has argued that many programs often harm not only members of disfavored groups, but also members of the favored groups.

For instance, Sowell has argued that affirmative action actually harms not only flesh-and-blood white and Asian-Americans who are passed over, but also harms those African-Americans who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they could have been much better served by attending schools where they would have been very successful. Another example Sowell often points to is minimum wage legislation, which is often justified in the name of helping the downtrodden, but has the effect of harming low-skilled workers by increasing unemployment, most especially young African-American males. 

Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans. 

A Positive Agenda for Policy Reform

In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.

The policy part of this equation, outlined below, is motivated by traditional justice concerns: holding people accountable under the rule of law for violations of constitutional rights, and promoting institutional reforms to more properly align incentives.

What follows below are policy proposals aimed at achieving a greater degree of racial justice for black Americans, but fundamentally informed by the constrained vision and traditional justice concerns outlined by Sowell. Most of these proposals are not on issues Sowell has written a lot on. In fact, some proposals may actually not be something he would support, but are—in my opinion—consistent with the constrained vision and traditional justice.

Reparations for Historical Rights Violations

Sowell once wrote this in regard to reparations for black Americans:

Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.

In other words, if the victims and perpetrators of injustice are no longer alive, it is not just to hold entire members of respective races accountable for crimes which they did not commit. However, this would presumably leave open the possibility of applying traditional justice concepts in those cases where death has not cheated us.

For instance, there are still black Americans alive who suffered under Jim Crow, as well as children and family members of those lynched. While it is too little, too late, it seems consistent with traditional justice to seek out and criminally prosecute perpetrators who committed heinous acts only a few generations ago against still-living victims. This is not unprecedented: elderly Nazis are still prosecuted for crimes against Jews. A similar thing could be done in the United States.

Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation. The Civil Liberties Act of 1988 was passed under President Reagan and gave living Japanese Americans who were interned during World War II some limited reparations. A similar system could be set up for living victims of Jim Crow. 

Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but it is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.

Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the idea of cosmic justice, which Sowell consistently decries, that intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” It is specifically giving redress to victims and deterring future bad conduct.

End Qualified Immunity

Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the civil rights statute, 42 USC § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who, according to Sowell, believe they have the right to amend the laws through judicial edict. The introduction of qualified immunity into the law by the activist Warren Court should be overturned.

Currently, qualified immunity effectively subsidizes police brutality, to the detriment of all Americans but with a disproportionate impact on black Americans. The law & economics case against qualified immunity is pretty straightforward:

In a civil rights lawsuit, the goal is to make the victim (or their families) of a rights violation whole by monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, make a decision about whether constitutional rights were violated and the extent of damages. A functioning system of settlements would result as a common law develops determining what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be determined to be acting reasonably under the circumstances once all the evidence is presented to a jury.

However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.

Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use. 

Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.

End the Drug War

While not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier attempts at Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law. 

Interestingly, Michelle Alexander’s work in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There she argues that the institutional incentives of police departments were systematically changed when the drug war was ramped up.

Alexander asks a question which is right in line with the constrained vision:

[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.

Alexander locates the impetus for ramping up the drug war in federal subsidies:

In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations. 

On top of that, police departments benefited from civil asset forfeiture:

As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.

As Alexander notes, black Americans (and other minorities) were disproportionately targeted in this ramped-up War on Drugs, with the effect that black Americans are disproportionately imprisoned even though drug usage and sales are relatively similar across races. Police officers have incredible discretion in determining whom to investigate and bring charges against. When it comes to the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander finds that the reason the criminal justice system has targeted black Americans is implicit bias among police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.

Anyone inspired by Sowell would need to determine whether this is because of racism or some other variable. It is important to note that Sowell never denies that racism exists or is a real problem in American society; but he does challenge us to determine whether racism alone is the cause of disparities. Here, Alexander makes a strong case that implicit racism causes the disparities in enforcement of the War on Drugs. A race-neutral explanation could be as follows, even though it still suggests ending the War on Drugs: the costs of enforcement against those unable to afford to challenge the system are lower, and black Americans are disproportionately represented among the poor in this country. As will be discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.

Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.

Reform Indigent Criminal Defense

A related aspect of how the criminal justice system has created a real barrier for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country, since a very high share of criminal prosecutions (roughly 80 percent) are initiated against those too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and among those in the criminal justice system, it should be no surprise that black Americans are disproportionately represented by public defenders in this country.

According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.

David Friedman and Stephen Schulhofer’s seminal article exploring the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is also in charge of prosecuting them. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is also a calculation problem inherent in government-run public defenders’ offices, whereby defendants may be systematically deprived of viable defense strategies because of a lack of price signals.

An interesting alternative proposed by Friedman and Schulhofer is a voucher system. This is similar to the voucher system Sowell has often touted for education. The idea would be that indigent criminal defendants get to pick the lawyer of their choosing that is part of the voucher program. The government would subsidize the provision of indigent defense, in this model, but would not actually pick the lawyer or run the public defender organization. Incentives would be more closely aligned between the defendant and counsel. 

Conclusion

While much more could be said consistent with the constrained vision that could help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. Racial justice also means reforming institutions to make sure incentives are right to deter conduct which harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions. 

Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell. 

Last month the EU General Court annulled the European Commission’s decision to block the proposed acquisition of Telefónica UK (O2) by Hutchison 3G UK.

In what could be seen as a rebuke of the Directorate-General for Competition (DG COMP), the court clarified the proof required to block a merger, which could have a significant effect on future merger enforcement:

In the context of an analysis of a significant impediment to effective competition the existence of which is inferred from a body of evidence and indicia, and which is based on several theories of harm, the Commission is required to produce sufficient evidence to demonstrate with a strong probability the existence of significant impediments following the concentration. Thus, the standard of proof applicable in the present case is therefore stricter than that under which a significant impediment to effective competition is “more likely than not,” on the basis of a “balance of probabilities,” as the Commission maintains. By contrast, it is less strict than a standard of proof based on “being beyond all reasonable doubt.”

Over the relevant time period, there were four retail mobile network operators in the United Kingdom: (1) EE Ltd, (2) O2, (3) Hutchison 3G UK Ltd (“Three”), and (4) Vodafone. The merger would have combined O2 and Three, which together would have accounted for 30-40% of the retail market.

The Commission argued that Three’s growth in market share over time and its classification as a “maverick” demonstrated that Three was an “important competitive force” that would be eliminated with the merger. The court was not convinced: 

The mere growth in gross add shares over several consecutive years of the smallest mobile network operator in an oligopolistic market, namely Three, which has in the past been classified as a “maverick” by the Commission (Case COMP/M.5650 — T-Mobile/Orange) and in the Statement of Objections in the present case, does not in itself constitute sufficient evidence of that operator’s power on the market or of the elimination of the important competitive constraints that the parties to the concentration exert upon each other.

While the Commission classified Three as a maverick, it also claimed that maverick status was not necessary for a firm to be an important competitive force. Nevertheless, the Commission pointed to Three’s history of maverick behavior, such as launching its “One Plan,” offering free international roaming, and providing 4G at no additional cost. The court, however, noted that those initiatives were “historical in nature” and provided no evidence of future conduct:

The Commission’s reasoning in that regard seems to imply that an undertaking which has historically played a disruptive role will necessarily play the same role in the future and cannot reposition itself on the market by adopting a different pricing policy.

The EU General Court appears to express the same frustration with mavericks as the court in H&R Block/TaxACT: “The arguments over whether TaxACT is or is not a ‘maverick’ — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis.”

By raising the bar of proof required to block a merger, the General Court’s recent decision also provides a “strong probability” that the days of maverick madness may soon be over.

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech. 

In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R – MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. This would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.  

While this may seem a sensible standard, if enacted it would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.

There is established legal precedent that Congress may not grant benefits on conditions that violate Constitutionally-protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated. 

Apparently aware of this precedent, the bill’s drafters attempt to circumvent the obstacle by conditioning immunity on matters nominally unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill conditions Section 230 immunity on platforms having terms of service for content moderation, and makes them subject to lawsuits if they do not act in “good faith” in policing those terms.

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard only appears to apply to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it considers some forms of conservative speech “hate speech” it will not allow.

Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone would not be unconstitutional. 

But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because any act of moderation, including moderation completely unrelated to any purported anti-conservative bias, would open them up to suit. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, for example for online bullying or for spreading false information, the same could follow.

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation on a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required radio stations to provide a “balanced discussion” and in practice allowed courts and federal agencies to police content, until it was repealed under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News over its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And more recently, conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to engage in expression against their will. All of these cases demonstrate that once the state starts trying to stipulate which views private organisations may and may not broadcast, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution and would trample on the very freedom of speech the Constitution protects. Conservatives should reject it.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law.

State bar associations, with the backing of state judiciaries and legislatures, are typically entrusted with a largely unqualified monopoly over licensing in legal services markets. This poses an unavoidable policy tradeoff. Designating the bar as gatekeeper might protect consumers by ensuring a minimum level of service quality. Yet the gatekeeper is inherently exposed to influence by interests with an economic stake in the existing market. Any licensing requirement that might shield uninformed consumers from unqualified or opportunistic lawyers also necessarily raises an entry barrier that protects existing lawyers against more competition. A proper concern for consumer welfare therefore requires that the gatekeeper impose licensing requirements only when they ensure that the efficiency gains attributable to a minimum quality threshold outweigh the efficiency losses attributable to constraints on entry.

There is increasing reason for concern that state bar associations are falling short of this standard. In particular, under the banner of “legal ethics,” some state bar associations and courts have blocked or impeded entry by innovative “legaltech” services without a compelling consumer protection rationale.

The LegalMatch case: A misunderstood platform

This trend is illustrated by a recent California appellate court decision interpreting state regulations pertaining to legal referral services. In Jackson v. LegalMatch, decided in late 2019, the court held that LegalMatch, a national online platform that matches lawyers and potential clients, constitutes an illegal referral service, even though it is not a “referral service” under the American Bar Association’s definition of the term, and the California legislature had previously declined to include online services within the statutory definition.

The court’s reasoning: the “marketing” fee paid by subscribing attorneys to participate in the platform purportedly runs afoul of state regulations that proscribe attorneys from paying a fee to referral services that have not been certified by the bar. (The lower court had felt differently, finding that LegalMatch was not a referral service for this purpose, in part because it did not “exercise any judgment” on clients’ legal issues.)

The court’s formalist interpretation of applicable law overlooks compelling policy arguments that strongly favor facilitating, rather than obstructing, legal matching services. In particular, the LegalMatch decision illustrates the anticompetitive outcomes that can ensue when courts and regulators blindly rely on an unqualified view of platforms as an inherent source of competitive harm.

Contrary to this presumption, legal services referral platforms enhance competition by reducing transaction-cost barriers to efficient lawyer-client relationships. These matching services benefit both consumers, who otherwise lack access to the full range of potential lawyers, and smaller or newer law firms, which lack the marketing resources or brand capital to attract the full range of potential clients. Consistent with the well-established economics of platform markets, these services operate under a two-sided model in which the unpriced delivery of attorney information to potential clients is financed by the positively priced delivery of interested clients to subscribing attorneys. Without this two-sided fee structure, the business model collapses and the transaction-cost barriers to matching the credentials of tens of thousands of lawyers with the preferences of millions of potential clients are inefficiently restored. Some legal matching platforms also offer fixed-fee service plans that can reduce the cost of legal representation relative to the conventional billable-hour model, which can saddle clients with unexpectedly or inappropriately high fees given the difficulty of forecasting the required quantity of legal services ex ante and of measuring their quality ex post.

Blocking entry by these new business models is likely to adversely impact competition and, as observed in a 2018 report by an Illinois bar committee, to injure lower-income consumers in particular. The result is inefficient, regressive, and apparently protectionist.

Indeed, subsequent developments in this litigation are regrettably consistent with the last possibility. After the California bar prevailed in its legal interpretation of “referral service” at the appellate court, and the Supreme Court of California declined to review the decision, LegalMatch then sought to register as a certified lawyer referral service with the bar. The bar responded by moving to secure a temporary restraining order against the continuing operation of the platform. In May 2020, a lower state court judge both denied the petition and expressed disappointment in the bar’s handling of the litigation.

Bar associations’ puzzling campaign against “LegalTech” innovation

This kind of regulatory overdrive is hardly unique to the LegalMatch case. Bar associations have repeatedly acted to impede entry by innovators that deploy digital technologies to enhance legal services, which can drive down prices in a field known for meager innovation and rigid pricing. Puzzlingly from a consumer welfare perspective, the bar associations have impeded or precluded entry by online services that expand opportunities for lawyers, increase the information available to consumers, and, in certain cases, place a cap on maximum legal fees.

In 2017, New Jersey Supreme Court legal ethics committees, following an “inquiry” by the state bar association, prohibited lawyers from partnering with referral services and legal services plans offered by Avvo, LegalZoom, and RocketLawyer. In 2018, Avvo discontinued operations due in part to opposition from multiple state bar associations (often backed up by state courts).

In some cases, bar associations have issued advisory opinions that, given the risk of disciplinary action, can have an in terrorem effect equivalent to an outright prohibition. In 2018, the Indiana Supreme Court Disciplinary Commission issued a “nonbinding advisory” opinion stating that attorneys who pay “marketing fees” to online legal referral services or agree to fixed-fee arrangements with such services “risk violation of several Indiana [legal] ethics rules.”

State bar associations similarly sought to block the entry of LegalZoom, an online provider of standardized legal forms that can be more cost-efficient for “cookie-cutter” legal situations than the traditional legal services model based on bespoke document preparation. These disputes are protracted and costly: it took LegalZoom seven years to reach a settlement with the North Carolina State Bar that allowed it to continue operating in the state. In a case pending before the Florida Supreme Court, the Florida bar is seeking to shut down a smartphone application that enables drivers to contest traffic tickets at a fixed fee, a niche in which the traditional legal services model is likely to be cost-inefficient given the relatively modest amounts that are typically involved.

State bar associations, with supporting action or inaction by state courts and legislatures, have ventured well beyond the consumer protection rationale that is the only potentially publicly-interested justification for the bar’s licensing monopoly. The results sometimes border on absurdity. In 2006, the New Jersey bar issued an opinion precluding attorneys from stating in advertisements that they had appeared in an annual “Super Lawyers” ranking maintained by an independent third-party publication. In 2008, based on a 304-page report prepared by a “special master,” the bar’s ethics committee vacated the opinion but merely recommended further consideration taking into account “legitimate commercial speech activities.” In 2012, the New York legislature even changed the “unlicensed practice of law” from a misdemeanor to a felony, an enhancement proposed by . . . the New York bar (see here and here). 

In defending their actions against online referral services, the bar associations argue that these steps are necessary to defend the public’s interest in receiving legal advice free from any possible conflict of interest. This is a presumptively weak argument. The associations’ licensing and other requirements are inherently tainted throughout by a “meta” conflict of interest. Hence it is the bar that rightfully bears the burden of demonstrating that any such requirement imposes no more than a reasonably necessary impediment to competition. This is especially so given that each bar association often operates its own referral service.

The unrealized potential of North Carolina State Board of Dental Examiners v. FTC

Bar associations might nonetheless take the legal position that they have statutory or regulatory discretion to take these actions and therefore any antitrust scrutiny is inapposite. If that argument ever held water, that is clearly no longer the case.

In an undeservedly underapplied decision, North Carolina State Board of Dental Examiners v. FTC, the Supreme Court held definitively in 2015 that any action by a “non-sovereign” licensing entity is subject to antitrust scrutiny unless that action is “actively supervised” by, and represents a “clearly articulated” policy of, the state. The Court emphasized that the degree of scrutiny is highest for licensing bodies administered by constituencies in the licensed market—precisely the circumstances that characterize state bar associations.

The North Carolina decision is hardly an outlier. It followed a string of earlier cases in which the Court had extended antitrust scrutiny to a variety of “hard” rules and “soft” guidance that bar associations had issued and defended on putatively publicly-interested grounds of consumer protection or legal ethics.

At the Court, the bar’s arguments did not meet with success. The Court rejected any special antitrust exemption for a state bar association’s “advisory” minimum fee schedule (Goldfarb v. Virginia State Bar (1975)) and, in subsequent cases, similarly held that limitations by professional associations on advertising by members—another requirement to “protect” consumers—do not enjoy any special antitrust exemption. The latter set of cases addressed specifically both advertising restrictions on price and quality by a California dental association (California Dental Association v. FTC (1999)) and blanket restrictions on advertising by a bar association (Bates v. State Bar of Arizona (1977)). As suggested by the bar associations’ recent actions toward online lawyer referral services, the Court’s consistent antitrust decisions in this area appear to have had relatively limited impact in disciplining potentially protectionist actions by professional associations and licensing bodies, at least in the legal services market.

A neglected question: Is the regulation of legal services anticompetitive?

The current economic situation poses a unique historical opportunity for bar associations to act proactively by enlisting independent legal and economic experts to review each component of the current licensing infrastructure and assess whether it passes the policy tradeoff between protecting consumers and enhancing competition. If not, any such component should be modified or eliminated to elicit competition that can leverage digital technologies and managerial innovations—often by exploiting the efficiencies of multi-sided platform models—that have been deployed in other industries to reduce prices and transaction costs. These modifications would expand access to legal services consistent with the bar’s mission and, unlike existing interventions to achieve this objective through government subsidies, would do so with a cost to the taxpayer of exactly zero dollars.

This reexamination exercise is arguably demanded by the line of precedent anchored in the Goldfarb and Bates decisions in 1975 and 1977, respectively, and culminating in the North Carolina Dental decision in 2015. This line of case law is firmly grounded in antitrust law’s underlying commitment to promote consumer welfare by deterring collective action that unjustifiably constrains the free operation of competitive forces. In May 2020, the California bar took a constructive if tentative step in this direction by reviving consideration of a “regulatory sandbox” to facilitate experimental partnerships between lawyers and non-lawyers in pioneering new legal services models. This follows somewhat more decisive action by the Utah Supreme Court, which in 2019 approved commencing a staged process that may modify regulation of the legal services market, including lifting or relaxing restrictions on referral fees and partnerships between lawyers and non-lawyers.

Neither the legal profession generally nor the antitrust bar in particular has allocated substantial attention to potentially anticompetitive elements in the manner in which the practice of law has long been regulated. Restrictions on legal referral services are only one of several practices that deserve a closer look under the policy principles and legal framework set forth most recently in North Carolina Dental and previously in California Dental. A few examples can illustrate this proposition. 

Currently, limitations on partnerships between lawyers and non-lawyers constrain the ability to achieve economies of scale and scope in the delivery of legal services and preclude firms from offering efficient bundles of complementary legal and non-legal services. Under a more surgical regulatory regime, legal services could be efficiently bundled with related accounting and consulting services, subject to appropriately targeted precautions against conflicts of interest. Additionally, as other commentators have observed and as “legaltech” innovations demonstrate, software could be more widely deployed to provide “direct-to-consumer” products that deliver legal services at a far lower cost than the traditional one-on-one lawyer-client model, subject to appropriately targeted precautions that reflect informational asymmetries in individual and small-business legal markets.

In another example, the blanket requirement of seven years of undergraduate and legal education raises entry costs that are not clearly justified for all areas of legal practice, some of which could potentially be competently handled by practitioners with intermediate categories of legal training. These are just two out of many possibilities that could be constructively explored under a more antitrust-sensitive approach that takes seriously the lessons of North Carolina Dental and the competitive risks inherent to lawyer self-regulation of legal services markets. (An alternative and complementary policy approach would be to move certain areas of legal services regulation out of the hands of the legal profession entirely.)

Conclusion

The LegalMatch case is indicative of a largely unexploited frontier in the application of antitrust law and principles to the practice of law itself. While commentators have called attention to the antitrust concerns raised by the current regulatory regime in legal services markets, and the evolution of federal case law has increasingly reflected these concerns, there has been little practical action by state bar associations, the state judiciary or state legislatures. This might explain why the delivery of legal services has changed relatively little during the same period in which other industries have been transformed by digital technologies, often with favorable effects for consumers in the form of increased convenience and lower costs. There is strong reason to believe a rigorous and objective examination of current licensing and related limitations imposed by bar associations in legal services markets is likely to find that many purportedly “ethical” requirements, at least when applied broadly and without qualification, do much to inhibit competition and little to protect consumers.