Facebook Lenses

While I was mostly unplugged on my vacation last week, with the news of Facebook’s disappointing earnings report and subsequent stock decline — the largest one-day loss by any company in U.S. stock market history — I couldn’t resist chiming in on Twitter:

I do regret the tweet a tad, and not only because “chiming in on Twitter” is always risky. Back when Stratechery started I wrote in the very first post that one of the topics I looked forward to exploring was “Why Wall Street is not completely insane”; I was thinking at the time about Apple, a company that, especially at that time, was regularly posting eye-popping revenue and profit numbers that did not necessarily lead to corresponding increases in the stock price, much to the consternation of Apple shareholders. The underlying point should be an obvious one: a stock price is about future earnings, not already realized ones; that the iPhone maker had just had a great quarter was an important signal about the future, but not the determining factor, and those pointing to the past to complain about a price predicated on the future were missing the point.

Of course that is exactly what I did in that tweet.

It’s worth noting, though, that while the explicit reasoning of those Apple stockholders may have been suspect, their sentiment has proven correct: in April 2013 Apple reported quarterly revenue of $43.6 billion and profit of $9.5 billion, and the day I started Stratechery the stock price was $63.25; five years later Apple reported quarterly revenue of $61.1 billion and profit of $13.8 billion, and on Friday the stock price was $190.98.

To be clear, I agreed with the Apple-investor sentiment all along: several of my early articles — Apple the Black Swan, Two Bears, and especially What Clayton Christensen Got Wrong — were about making the case that Apple’s business was far more sustainable with much deeper moats than most people realized, and it was that sustainability and defensibility that mattered more than any one quarter’s results.

The question is whether a similar case can be made for Facebook: certainly my tweet taken literally was naive for the exact reasons those Apple investor complaints missed the point five years ago; what about the sentiment, though? Just how good of a business is Facebook?

As with many such things, it all depends on what lens you use to examine the question.

Lens 1: Facebook’s Finances

As is often the case with earnings, the move in Facebook’s stock was only a bit about the results and a whole lot about future expectations. On Wednesday, Facebook’s stock closed at $217, but then its earnings showed revenue of $13.2 billion, slightly below Wall Street’s expectations; unsurprisingly, the stock slid about 8% in after-hours trading to around $200. The real drop was spurred by two comments on the earnings call from Facebook CFO Dave Wehner about Facebook’s expectations going forward.

First, with regards to revenue:

Turning now to the revenue outlook; our total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high-single digit percentages from prior quarters sequentially in both Q3 and Q4.

Second, with regards to operating margin:

Turning now to expenses; we continue to expect that full-year 2018 total expenses will grow in the range of 50% to 60% compared to last year…Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019. Over the next several years, we would anticipate that our operating margins will trend towards the mid-30s on a percentage basis.

From a purely financial perspective, both pieces of news are less than ideal, but at least understandable. In terms of revenue, Facebook’s growth is from a very large base, which means that this quarter’s 42% year-over-year revenue growth (to $13.2 billion from $9.3 billion) is, in absolute terms, 36% greater than the year-ago quarter’s 45% revenue growth (to $9.3 billion from $6.4 billion). To put it in simpler terms, massive growth rates inevitably decline even as massive absolute growth remains; as a point of comparison, Google in the same relative timeframe (14 years after incorporation) grew 35% to $12.21 billion (i.e. Facebook is better on both metrics).
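
The arithmetic is worth making explicit, because the rate and the absolute number move in opposite directions. Here is a quick sketch using the rounded figures above (which is why the gap comes out near 35% rather than the 36% the unrounded figures produce):

```python
# Rounded quarterly revenue figures (billions of USD) from the paragraph above.
q2_2016, q2_2017, q2_2018 = 6.4, 9.3, 13.2

def yoy_growth(new, old):
    """Year-over-year growth rate as a percentage."""
    return (new - old) / old * 100

growth_2018 = yoy_growth(q2_2018, q2_2017)  # ~41.9%: the growth *rate* fell...
growth_2017 = yoy_growth(q2_2017, q2_2016)  # ~45.3%

abs_2018 = q2_2018 - q2_2017  # ~$3.9 billion of new revenue this year...
abs_2017 = q2_2017 - q2_2016  # ...versus ~$2.9 billion a year earlier
gap = (abs_2018 / abs_2017 - 1) * 100  # absolute growth still rose ~35%
```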

As far as the operating margin decline, in a normal company — i.e. one with marginal costs — a revenue decrease would not necessarily lead to a meaningful decline in margin since selling fewer products would mean lower costs of goods sold. Facebook, of course, is not a normal company: the only marginal costs for the ads they sell are credit card fees; like most tech companies the vast majority of costs are “below the line” (mostly in Research & Development, but also Sales & Marketing and General & Administrative); it follows, then, that a decrease in revenue growth would, absent an explicit effort to decrease unrelated (to revenue) expense growth, lead to lower operating margins.
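
A toy model makes the operating-leverage mechanics concrete; the figures here are purely hypothetical, chosen only to illustrate the shape of the effect:

```python
# Hypothetical numbers, purely for illustration -- not Facebook's actuals.
def operating_margin(revenue, expenses):
    """Operating margin as a percentage of revenue."""
    return (revenue - expenses) / revenue * 100

revenue, expenses = 100.0, 55.0
base = operating_margin(revenue, expenses)  # 45%

# Expenses are "below the line": they grow 50% regardless of sales volume.
expenses *= 1.5

fast = operating_margin(revenue * 1.5, expenses)  # revenue keeps pace: still 45%
slow = operating_margin(revenue * 1.3, expenses)  # revenue growth slows: ~36.5%
```

When revenue grows as fast as expenses, the margin is unchanged; let revenue growth lag and the margin compresses immediately.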

In fact, Facebook is not only not decreasing expenses, it is going in the opposite direction; expenses are growing faster than ever, even as revenue growth has clearly fallen off:

Facebook's revenue growth is decreasing even as its expense growth increases

I suspect it is this chart, more than anything else, that explains the drop in Facebook’s stock price: it’s not one thing or the other; it is both revenue growth slowing and expenses accelerating at the same time, and all indications from management are that the trends will continue.

Again, relatively speaking Facebook is in great shape financially — I already noted the company had better revenue and growth numbers than Google at a similar point, and their operating margins are substantially better as well — but there’s no question this is a pretty substantial shift in the company’s long-term outlook. The financial lens still provides a pretty positive view, but it is indeed less positive than before.

Lens 2: Facebook’s Products

It is always a bit confusing to write about Facebook, because there is both Facebook the company and Facebook the product, and there is no question the greatest amount of negativity has, for several years now, been centered around the latter. To that end, it is tempting to conflate the two; for example, the New York Times wrote in an article headlined Facebook Starts Paying a Price for Scandals:

For nearly two years, Facebook has appeared bulletproof despite a series of scandals about the misuse of its giant social network. But the Silicon Valley company’s streak ended on Wednesday when it said that the accumulation of issues was starting to hurt its multibillion-dollar business — and that the costs are set to continue playing out for months.

This is true as far as it goes, particularly when it comes to expenses: Facebook is on pace to increase its security and content review teams to 20,000 people, a three-fold increase in 18 months; that is why CEO Mark Zuckerberg warned on an earnings call last year:

I’ve directed our teams to invest so much in security on top of the other investments we’re making that it will significantly impact our profitability going forward, and I wanted our investors to hear that directly from me. I believe this will make our society stronger, and in doing so will be good for all of us over the long term. But I want to be clear about what our priority is. Protecting our community is more important than maximizing our profits.

What is much less clear is what effect, if any, Facebook’s controversies have had on the top line. There were three factors that for many years made Facebook a monster when it came to revenue growth:

  • The number of users was increasing
  • Ad load (the number of ads shown in the News Feed) was increasing
  • The price-per-ad was increasing

A year ago, though, Facebook stopped increasing ad load; as I have documented, this did result in an even sharper increase in the price paid per ad, but it was still a retardant on growth.

Then, over the last year, Facebook’s user growth started to slow, and in the most-profitable North American region, has effectively plateaued. That, though, isn’t because of Facebook’s controversies: it is because the app has run out of people! The company has 241 million monthly active users in the US & Canada, 65% of the total population of 372 million (including children who aren’t supposed to have accounts before the age of 13).

Given that degree of nearly total penetration, what is more important when it comes to evaluating Facebook’s health is that there is no indication the company is losing users. Sure, the numbers in North America decreased by a million in Q4 2017, but now that million is back; I expect something similar when it comes to the million users the company lost in Europe when it required affirmative consent from users to continue using the app because of GDPR.

The fact of the matter is that nothing has happened to diminish Facebook’s moat when it comes to attracting and retaining users: the number one feature of a social network is how many people are on it, and for all intents and purposes everyone is on Facebook — whether they like it or not.

Interestingly, Facebook is working to deepen that moat even further with its focus on Groups. Zuckerberg said on the earnings call:

There are more than 200 million people that are members of meaningful groups on Facebook, and these are communities that, upon joining, they become the most important part of your Facebook experience and a big part of your real world social infrastructure. These are groups for new parents, for people with rare diseases, for volunteering, for military families deployed to a new base and more.

We believe there is a community for everyone on Facebook. And these meaningful communities often span online and offline and bring people together in person. We found that every great community has an engaged leader. But running a group can take a lot of time. So we have a road map to make this easier. That will enable more meaningful groups to get formed, which will help us to find relevant ones to recommend to you, and eventually achieve our five-year goal of helping 1 billion people be a part of meaningful communities.

Zuckerberg is referring to his 2017 manifesto Building a Global Community; it is a particularly attractive goal from Facebook’s perspective because it makes the product stickier than ever.

All that noted, the most important reason to view Facebook through the lens of the company’s products is that the sheer scale of Facebook the app makes it easy to lose sight of the still substantial growth potential of those other products. Instagram in particular recently passed 1 billion users, which is an incredible number that is still less than half of Facebook the app’s total users; by definition Instagram has reached less than half of its addressable market.

Moreover, Instagram has not only been untouched by Facebook’s controversies, it is such a compelling product that, anecdotally speaking, most “Facebook-nevers” or “Facebook-quitters” readily admit to using the service daily. The app also hasn’t come close to reaching its monetization potential: while the feed carries the same ad load as Facebook, the Snapchat-inspired Stories format that has exploded in usage has barely been monetized; in fact, Facebook’s executives attributed some of the company’s slowing revenue growth to increased Stories usage (instead of the feed). From a purely financial perspective this is certainly a cause for concern, but from a strategic perspective it means that Instagram is in an even stronger position than it was previously. Remember, revenue and profit are lagging indicators, and the explosion in Instagram Stories is an extreme example of why that is such an important fact to keep in mind.

WhatsApp is increasingly compelling as well: not only does the app remain the dominant communications medium in much of the world, but the addition of WhatsApp Status updates and Stories dramatically increases the monetization potential of the service — a potential that Facebook hasn’t even started to realize.1

There is some degree of long-term risk when it comes to products: Facebook acquired both Instagram and WhatsApp, but the company should not be allowed to acquire another social network of similar size and velocity to those two, and I doubt they would be. That concern, though, is very far in the future: for now the product lens suggests that Facebook is as strong as ever.

Lens 3: Facebook’s Advertising Infrastructure

This lens takes the exact opposite perspective of Lens 2; looking at the company from a product perspective shows four different apps, but looking at the company from an advertising perspective shows a single integrated machine.

This was a point Facebook executives touched on repeatedly in last week’s earnings call. Here is Wehner (emphasis mine):

In terms of Facebook versus Instagram, they’re obviously both contributing to revenue growth. Instagram is growing more quickly and making an increasing contribution to growth. And we’ve been pleased with how Instagram is growing. Facebook and Instagram are really one ads ecosystem.

Zuckerberg:

We’re also making progress developing Stories into a great format for ads. We’ve made the most progress here on Instagram, but this quarter, we started testing Stories ads on Facebook too…

COO Sheryl Sandberg added:

Since we have so many different places where you have Stories formats in Instagram and WhatsApp and Facebook, as volume increases of the opportunity, advertisers get more interested.

Zuckerberg and Sandberg were obviously talking about the potential for advertising in Stories, but that potential is simply a repeat of what has already happened with Feed ads: Facebook spent years building out News Feed advertising — not simply the display and targeting technology but also the entire back-end apparatus for advertisers, connections with non-Facebook data sources and points-of-sale, relationships with ad buyers, etc. — and then simply plugged Instagram into that infrastructure.

The payoff of this integrated approach cannot be overstated. Instagram got to scale in terms of monetization years faster than it would have on its own, even as the initial product team had the freedom to stay focused on the user experience. Facebook the app benefited as well, because Instagram both increased the surface area for Facebook ad campaigns and improved Facebook’s targeting capabilities.

The biggest impact, though, is on potential competition. It is tempting to focus on the “R” in “ROI” — the return on investment — and as I just noted Instagram + Facebook makes that even more attractive. Just as important, though, is the “I”; there is tremendous benefit to being a one-stop shop for advertisers, who can save time and money by focusing their spend on Facebook. The tools are familiar, the buys are made across platforms, and as Zuckerberg and Sandberg alluded to with regard to Stories, the ads themselves only need to be made once to be used across multiple platforms. Why even go to the trouble to advertise anywhere else?

This is why the advertising lens is perhaps the most useful when it comes to understanding just how strong Facebook’s business remains, and why the Instagram acquisition in particular was such a big deal. For all the discussion of Facebook the app’s lock-in, it is very reasonable to wonder if engagement is decreasing over time, particularly amongst young people, or if controversies may drive down usage — or worse. Were Instagram a separate company, advertisers might find themselves with no choice but to spread out their advertising to multiple companies, and once their advertising was diversified, it would be a much smaller step to target users on other networks like Snapchat or Twitter. As it stands there is no reason to leave Facebook the advertising platform, no matter what happens with Facebook the app.

Lens 4: Facebook’s Multiplying Moats

Facebook’s advertising moat may be its most important, and its network moat its strongest, but the company has actually added moats, particularly in the last year.

The first is GDPR; this may seem counter-intuitive, given that Facebook said last week the regulation cost them a million users, and that one of the factors that would hurt revenue growth was the increased control the company was giving users over their personal information. Keep in mind, though, that GDPR applies to everyone, not just Facebook, and as Sandberg noted on the call (emphasis mine):

Advertisers are still adapting to the changes, so it’s early to know the longer-term impact. And things like GDPR and other privacy changes that may happen from us or may happen with regulation could make ads more relevant. One thing that we know that’s not going to change is that advertisers are always looking for the highest ROI opportunity. And what’s most important in winning budget is our relative performance in the industry, and we believe we’ll continue to do very well on that.

I made this exact point previously:

While GDPR advocates have pointed to the lobbying Google and Facebook have done against the law as evidence that it will be effective, that is to completely miss the point: of course neither company wants to incur the costs entailed in such significant regulation, which will absolutely restrict the amount of information they can collect. What is missed is that the increase in digital advertising is a secular trend driven first-and-foremost by eyeballs: more-and-more time is spent on phones, and the ad dollars will inevitably follow. The calculation that matters, then, is not how much Google or Facebook are hurt in isolation, but how much they are hurt relative to their competitors, and the obvious answer is “a lot less”, which, in the context of that secular increase, means growth.

Secondly, all of those costs that Facebook is incurring for security and content review that are reducing operating margin? Perhaps the stock market would feel better if they were characterized as moat expansion, because that’s exactly what they are: any would-be Facebook competitor is going to have to make a similar investment, and do it from a dramatically lower revenue base.

Moreover, just as Facebook benefits from scaling its ad infrastructure to all of its products, it can do the same with its security efforts. Zuckerberg stated:

More broadly, our strategy is to use Facebook’s computing infrastructure, business platforms and security systems to serve people across all of our apps…We’re using AI systems in our global community operations team to fight spam, harassment, hate speech, and terrorism across all of our apps to keep people safe. And this is incredibly useful for apps like WhatsApp and Instagram as it helps us manage the challenges of hyper-growth there more effectively.

This is why the lens through which you view Facebook matters so much: the exact same set of facts, viewed from a financial perspective, is a clear negative; from a moat perspective, a clear positive.

Lens 5: Facebook’s Raison D’être

Needless to say, once you view Facebook through anything but a financial lens the health of the business is hard to argue with (and frankly, the finances went from phenomenal to merely fantastic, but it’s all relative). That’s why I can’t help but wonder if there is something more fundamental about both the collapse in Facebook’s stock and the general celebration that followed.

To return to the early years of Stratechery, it was striking how widespread Facebook skepticism was; I first tried to argue otherwise five years ago today, and in 2015 felt compelled to write The Facebook Epoch that begins like this:

I’m fond of saying that few companies are as underrated as Facebook is, especially in Silicon Valley. Admittedly, it seems strange to say such a thing about a $245 billion company with a trailing 12-month P/E ratio of 88, but that is Wall Street sentiment; in the tech bubble many seem to simply assume the company is ever on the brink of teetering “just like MySpace”, never mind the fact that the social network pioneer barely broke 100 million registered users, less than 10% of the number of active users Facebook attracted in a single day late last month. Or, as more sober minds may argue, sure, Facebook looks unstoppable today, but then again, Google looked unstoppable ten years ago when social seemingly came out of nowhere: surely the Facebook killer is imminent!

That sentiment sure seems to be back in full force!

At the risk of veering into broad-based psychoanalysis, I think a lot of the Facebook skepticism is because so much of the content seems so shallow and petty, or in the case of the last few years, actively malicious. How can such a product survive?

In fact, it survives for the very reason it exists: Facebook began in Zuckerberg’s Harvard dorm room by quite literally digitizing offline relationships that already existed, both in real life and in actual physical “facebooks”. Facebook is so powerful because of this direct connection to the real world: it is shallow and petty and sometimes malicious — and yes, often good — because we humans are shallow and petty and sometimes malicious — and yes, often good.

By extension, to insist that Facebook will die any day now is in some respects to suggest that humanity will cease to exist any day now; granted, it is a company and companies fail, but even if Facebook failed it would only be a matter of time before another Facebook rose to replace it.

That seems unlikely: for all of the company’s travails and controversies over the past few years, its moats are deeper than ever, and its money-making potential is not only huge but growing both internally and secularly. To that end, what is perhaps most distressing of all to would-be competitors is in fact this quarter’s results: at the end of the day Facebook took a massive hit by choice; the company is not maximizing the short term, it is spending the money and suppressing its revenue potential in favor of becoming more impenetrable than ever.

“Utter disaster” indeed.

  1. There is Messenger as well; I am more dubious of its long-term monetization potential because its natural advertising spaces — status updates and stories — are basically what Facebook is [↩︎]

The European Commission Versus Android

To understand how Google ended up with a €4.3 billion fine and a 90-day deadline to change its business practices around Android, it is critical to keep one date in mind: July 2005.1 That was when Google acquired a still in-development mobile operating system called Android, and to put the acquisition in context, Steve Jobs was, at least publicly, “not convinced people want to watch movies on a tiny little screen”. He was, of course, referring to the iPod; Apple would go on to release an iPod with video playback a few months later, but the iPhone was still a year-and-a-half away from being revealed.

In other words, Android, at least at the beginning, wasn’t a response to Apple;2 the real target was Microsoft (and to a lesser extent Blackberry), which seemed poised to dominate smartphones just as it had the desktop. That was an untenable situation for Google; then-Vice President of Product Management Sundar Pichai wrote on the Google Public Policy blog about the company’s challenges on PCs:

Google believes that the browser market is still largely uncompetitive, which holds back innovation for users. This is because Internet Explorer is tied to Microsoft’s dominant computer operating system, giving it an unfair advantage over other browsers. Compare this to the mobile market, where Microsoft cannot tie Internet Explorer to a dominant operating system, and its browser therefore has a much lower usage.

What mattered to Google was access to end users: that is what makes the Aggregation flywheel turn. On PCs the company had succeeded through a combination of flat-out being better, the fact that it was very simple to visit a new URL (and make it your homepage), and deals with OEMs to set Google as the homepage from the beginning. All would be more difficult to achieve on mobile, at least mobile as it was understood in 2005: applications were notoriously difficult to find and install, and Microsoft and Blackberry had locked down their operating systems to a much greater extent than Microsoft had on the PC.

Thus the Android gambit: Google decided to take on Microsoft directly in mobile operating systems, and its most powerful tool would not be the quality of the operating system, but the business model. To that end, while Google did, naturally, retool Android’s user interface once the iPhone was announced, the business model remained Microsoft kryptonite: whereas Microsoft charged a per-device licensing fee, just as it had with Windows, Android would not only be free and open-source, Google would actually share search revenue derived from Android with OEMs that installed the operating system.

Of course Android also ended up being a much better experience than Windows Mobile in the post-iPhone world, and the deal was irresistible to OEMs flailing for a response to the iPhone: get a (somewhat) comparable (sort-of) touch-based operating system for free, and even make money after the initial sale! Indeed, not only did Android effectively kill Microsoft’s mobile efforts, it went on to take over the world via a massive ecosystem of device makers and mobile carriers that competed to drive down costs and increase distribution.

Android’s Success

That Android increases competition was the focus of the latest blog post from Pichai — now Google’s CEO — in response to the ruling:

Today, the European Commission issued a competition decision against Android, and its business model. The decision ignores the fact that Android phones compete with iOS phones, something that 89 percent of respondents to the Commission’s own market survey confirmed. It also misses just how much choice Android provides to thousands of phone makers and mobile network operators who build and sell Android devices; to millions of app developers around the world who have built their businesses with Android; and billions of consumers who can now afford and use cutting-edge Android smartphones. Today, because of Android, there are more than 24,000 devices, at every price point, from more than 1,300 different brands…

What Pichai doesn’t say is that this competition is not so much a feature as it was the point: open-sourcing Android commoditized smartphone development, meaning anyone could enter, even as few were in a position to profit over time. That included Google, at least at the beginning, which was by design: remember, the point of Android was not to make money like Windows, it was to stop Windows or any other operating system from getting between Google and users. Venture capitalist Bill Gurley explained in a 2011 post entitled The Freight Train That Is Android:

Android, as well as Chrome and Chrome OS for that matter, are not “products” in the classic business sense. They have no plan to become their own “economic castles.” Rather they are very expensive and very aggressive “moats,” funded by the height and magnitude of Google’s castle [(search advertising)]. Google’s aim is defensive not offensive. They are not trying to make a profit on Android or Chrome. They want to take any layer that lives between themselves and the consumer and make it free (or even less than free). Because these layers are basically software products with no variable costs, this is a very viable defensive strategy. In essence, they are not just building a moat; Google is also scorching the earth for 250 miles around the outside of the castle to ensure no one can approach it. And best I can tell, they are doing a damn good job of it.

Indeed they were, but the strategy had a built-in problem: Android was, well, open source, and just as that helped Android spread, it could just as easily be forked into an initially compatible operating system that didn’t connect to Google’s services — the castle that Google was trying to protect all along. Google needed a wall for its moat, and found one in the Google Play Store.

The Google Play Store and Google Play Services

The Google Play Store, not unlike Android’s user interface, was a response to the iPhone, specifically the highly successful launch of the App Store in 2008. And while the Play Store often lagged the App Store when it came to cutting-edge apps, particularly in the early days, it quickly became one of Google’s most valuable services, both in terms of making Android useful as well as making Google money.

Note, though, that the Play Store is not a part of Android: it has always been closed-source and exclusive to Google’s version of Android, just like other Google services like Gmail, Maps, and YouTube. The problem Google had with all of those apps, though, was that they were updated with the operating system, and OEMs and carriers — who only made money when a device was initially sold — were not particularly incentivized to update the operating system.

Google’s solution was Google Play Services: first released in 2012, it was distributed via the Play Store to all devices running Android 2.2 Froyo and later, and provided an easily updatable API layer that would, in the initial version, allow Google to update its own apps independent of operating system updates. It was an elegant solution to a real problem inherent in the free-wheeling model Google had taken for Android distribution: widespread fragmentation. Soon all of Google’s apps were built on top of Google Play Services, and later that year Google started opening it up to developers.

The initial version was quite modest; here is the announcement on Google+:

At Google I/O we announced a preview of Google Play services, a new development platform for developers who want to integrate Google services into their apps. Today, we’re kicking off the full launch of Google Play services v1.0, which includes Google+ APIs and new OAuth 2.0 functionality. The rollout will cover all users on Android 2.2+ devices running the latest version of the Google Play store.

Over the next several years, though, Google devoted more and more of its effort — and its most interesting APIs, like location and maps and gaming services — to Google Play Services; meanwhile, whatever equivalent service was in the open-source version of Android was effectively frozen in time. The net result is incredibly significant to teasing out this case: Google Play Services silently shifted ever more apps from Android apps to Google Play apps; today, no Google app will function on open-source Android without extensive reworking, and the same applies to ever more 3rd-party apps as well.

That noted, it is hard, in my estimation, to see this as being an antitrust violation. The fact of the matter is that Google was addressing a legitimate problem in the Android ecosystem, and the company didn’t make any developer use Google Play Services APIs instead of the more basic ones still available even today.

The European Commission Case

The European Commission found Google guilty of breaching EU antitrust rules in three ways:

  • Illegally tying Google’s search and browser apps to the Google Play Store; to get the Google Play Store and thus a full complement of apps, OEMs have to pre-install Google search and Chrome and make them available within one screen of the home page.
  • Illegally paying OEMs to exclusively pre-install Google Search on every Android device they made.
  • Illegally barring OEMs that installed Google’s apps from selling any device that ran an Android fork.

Taken in isolation, these seem to run from least problematic to most problematic.

  • The Google Play Store has always been an exclusive Google app; it seems that Google ought to be able to distribute it exclusively as part of a bundle if it so chooses.
  • Pinning all revenue from Google Search to exclusivity on all devices quite obviously makes it very difficult for alternative search services to build share (as they lack access to pre-installs, one of the most effective channels for customer acquisition); this seems to be more of a Google Search dominance issue than an Android dominance issue though.
  • Predicating the availability of any of Google’s apps, including the Google Play Store, on OEMs not taking advantage of the open source nature of Android on devices that will not include Google apps seems much more problematic than Google insisting its apps be distributed in a bundle. The latter is Google’s prerogative; the former is dictating OEM actions just because Google can.

This is where the history of Android matters; before Google Play Services, the primary challenge in building a competitive fork of Android would have been convincing developers to upload their apps to a new app store (since Google would obviously not want to put its apps, including the Play Store, on said fork). That fork, though, never materialized because of Google’s contractual terms barring OEMs from selling any devices built on such a fork.

Today the situation is very different: that contractual limitation could go away tomorrow (or, more accurately, in 90 days), and it wouldn't really matter, because, as I explained above, many apps are no longer Android apps but rather Google Play apps. Running them on an Android fork is by no means impossible, but most would require far more rework than simply uploading them to a new app store.

In short, in my estimation the real antitrust issue is Google contractually foreclosing OEMs from selling devices with non-Google versions of Android; the only way to undo that harm in 2018, though, would be to make Google Play Services available to any Android fork.

The Commission’s Remedies

To be sure, that’s not exactly what the European Commission ordered (in fact, “Google Play Services” does not appear a single time in the press release); the Commission seems to feel that the three issues do stand alone. That means that Google has to respond to each individually:

  • Google has to untie the Play Store from Search and the Chrome browser
  • Google has already stopped paying OEMs for portfolio-wide search exclusivity
  • Google can no longer stop OEMs from selling devices with Android forks

The most momentous by far is the first (despite the fact it is the weakest allegation, in my estimation). Samsung, or any other OEM, could in 90 days sell a device with Bing search only and the Google Play Store (where of course Google Search could be downloaded). This will likely accrue to consumers’ benefit: Microsoft, Google, and other providers will soon be bidding to be the default search option, and, given the commoditized nature of Android devices, it is likely that most of what they are willing to pay will go towards lower prices.

Still, it is an unsatisfying remedy: Google built Android for the express purpose of monetizing search, and to be denied that by regulatory edict feels off; Google, though, bears a lot of the blame for going too far with its contracts.

More broadly, the European Commission continues to be a bit too cavalier about denying companies — well, Google, mostly — the right to monetize the products they spend billions of dollars at significant risk to develop; this was my chief objection to last year’s Google Shopping case. In this case I narrowly come down on the Commission’s side almost by accident: I think Google acted illegally by contractually foreclosing Android competitors at a time when it might have made a difference, but I am concerned that the Commission’s publicly released reasoning doesn’t seem to grasp exactly how Android has developed, the choices Google made, and why.

That noted, I highly doubt Google would do anything differently: when it comes to the company’s goals, Android could not be a bigger success — if anything, this ruling is evidence of just how successful the product was.

  1. The exact date of the acquisition is unknown [↩︎]
  2. For those wondering, then-Google CEO Eric Schmidt didn’t join Apple’s Board of Directors until August 2006 [↩︎]

Intel and the Danger of Integration

Last week Brian Krzanich resigned as the CEO of Intel after violating the company’s non-fraternization policy. The details of Krzanich’s departure, though, ultimately don’t matter: his tenure was an abject failure, the extent of which is only now coming into view.

Intel’s Obsolete Opportunity

When Krzanich was appointed CEO in 2013 it was already clear that arguably the most important company in Silicon Valley’s history was in trouble: PCs, long Intel’s chief money-maker, were in decline, leaving the company ever more reliant on the sale of high-end chips to data centers; Intel had effectively zero presence in mobile, the industry’s other major growth area.

Still, I framed the situation that faced Krzanich as an opportunity, and drew a comparison to the challenges that faced the legendary Andy Grove three decades ago:

By the 1980s, though, it was the microprocessor business, fueled by the IBM PC, that was driving growth, while the DRAM business was fully commoditized and dominated by Japanese manufacturers. Yet Intel still fashioned itself a memory company. That was their identity, come hell or high water.

By 1986, said high water was rapidly threatening to drag Intel under. In fact, 1986 remains the only year in Intel’s history that they made a loss. Global overcapacity had caused DRAM prices to plummet, and Intel, rapidly becoming one of the smallest players in DRAM, felt the pain severely. It was in this climate of doom and gloom that Grove took over as CEO. And, in a highly emotional yet patently obvious decision, he once and for all got Intel out of the memory manufacturing business.

Intel was already the best microprocessor design company in the world. They just needed to accept and embrace their destiny.

Fast forward to the challenge that faced Krzanich:

It is into a climate of doom and gloom that Krzanich is taking over as CEO. And, in what will be a highly emotional yet increasingly obvious decision, he ought to commit Intel to the chip manufacturing business, i.e. manufacturing chips according to other companies’ designs.

Intel is already the best microprocessor manufacturing company in the world. They need to accept and embrace their destiny.

That article is now out of date: in a remarkable turn of events, Intel has lost its manufacturing lead. Ben Bajarin wrote last week in Intel’s Moment of Truth:

Not only has the competition caught Intel they have surpassed them. TSMC is now sampling on 7nm and AMD will ship their architecture on 7nm technology in both servers and client PCs ahead of Intel. For those who know their history, this is the first time AMD has ever beat Intel to a process node. Not only that, but AMD will likely have at least an 18 month lead on Intel with 7nm, and I view that as conservative.

As Bajarin notes, 7nm for TSMC (or Samsung or Global Foundries) isn’t necessarily better than Intel’s 10nm; chip-labeling isn’t what it used to be. The problem is that Intel’s 10nm process isn’t close to shipping at volume, and the competition’s 7nm processes are. Intel is behind, and its insistence on integration bears a large part of the blame.

Intel’s Integrated Model

Intel, like Microsoft, had its fortunes made by IBM: eager to get the PC, which an increasingly vocal section of its customer base was demanding, out the door, the mainframe maker outsourced much of the technology to third-party vendors, the most important being an operating system from Microsoft and a processor from Intel. The impact of the former decision was the formation of an entire ecosystem centered around MS-DOS, and eventually Windows, cementing Microsoft's dominance.

Intel was a slightly different story; while an operating system was simply bits on a disk, and thus easily duplicated for all of the PCs IBM would go on to sell, a processor was a physical device that needed to be manufactured. To that end IBM insisted on having a “second source”, that is, a second non-Intel manufacturer for Intel’s chips. Intel chose AMD, and licensed first the 8086 and 8088 designs that were in the original IBM PC, and later, again under pressure from IBM, the 80286 design; the latter was particularly important because it was designed to be upward compatible with everything that followed.

This laid the groundwork for Intel’s strategy — and immense profitability — for the next 35 years. First off, the dominance of Intel’s x86 design was assured thanks to its integration with DOS/Windows: specifically, DOS/Windows created a two-sided market of developers and PC users, and DOS/Windows ran on x86.

Microsoft and Intel were integrated in the PC value chain

However, thanks to its licensing deal with AMD, Intel wasn’t automatically entitled to all of the profits that would result from that integration; thus Intel doubled-down on an integration of its own: the design and manufacture of x86 chips. That is, Intel would invest huge sums of money into creating new and faster designs (the 386, the 486, the Pentium, etc.), and also invest huge sums of money into ever smaller and more efficient manufacturing processes that would push the limits of Moore’s Law. This one-two punch would ensure that, despite AMD’s license, Intel’s chips would be the only realistic choice for PC makers, allowing the company to capture the vast majority of the profits created by the x86’s integration with DOS/Windows.

Intel was largely successful. AMD did take the performance crown around the turn of the century with the Athlon 64, but the company was unable to keep up with Intel financially when it came to fabs, and Intel illegally leveraged its dominant position with OEMs to keep them buying mostly Intel parts; then, a few years later, Intel not only took back the performance lead with its Core architecture, but settled into the “tick-tock” strategy where it alternated new designs and new manufacturing processes on a regular schedule. The integration advantage was real.

TSMC’s Modular Approach

In the meantime there was a revolution brewing in Taiwan. In 1987, Morris Chang founded Taiwan Semiconductor Manufacturing Company (TSMC) promising “Integrity, commitment, innovation, and customer trust”. Integrity and customer trust referred to Chang’s commitment that TSMC would never compete with its customers with its own designs: the company would focus on nothing but manufacturing.

This was a completely novel idea: at that time all chip manufacturing was integrated a la Intel; the few firms that were only focused on chip design had to scrap for excess capacity at Integrated Device Manufacturers (IDMs) who were liable to steal designs and cut off production in favor of their own chips if demand rose. Now TSMC offered a much more attractive alternative, even if their manufacturing capabilities were behind.

In time, though, TSMC got better, in large part because it had no choice: soon its manufacturing capabilities were only one step behind industry standards, and within a decade it had caught up (although Intel remained ahead of everyone). Meanwhile, the fact that TSMC existed created the conditions for an explosion in "fabless" chip companies that focused on nothing but design. For example, in the late 1990s there was an explosion in companies focused on dedicated graphics chips: nearly all of them were manufactured by TSMC. And, all along, the increased business let TSMC invest even more in its manufacturing capabilities.

Integrated Intel was competing with a competitive modular ecosystem

This represented a three-pronged assault on Intel's dominance:

  • Many of those new fabless design companies were creating products that were direct alternatives to Intel chips for general purpose computing. The vast majority of these were based on the ARM architecture, but also AMD in 2008 spun off its fab operations (christened GlobalFoundries) and became a fabless designer of x86 chips.
  • Specialized chips, designed by fabless design companies, were increasingly used for operations that had previously been the domain of general purpose processors. Graphics chips in particular were well-suited to machine learning, cryptocurrency mining, and other "embarrassingly parallel" operations; many of those applications have spawned specialized chips of their own. There are dedicated bitcoin chips, for example, or Google's Tensor Processing Units: all are manufactured by TSMC.
  • Meanwhile TSMC, joined by competitors like GlobalFoundries and Samsung, were investing ever more in new manufacturing processes, fueled by the revenue from the previous two factors in a virtuous cycle.

Intel’s Straitjacket

Intel, meanwhile, was hemmed in by its integrated approach. The first major miss was mobile: instead of simply manufacturing ARM chips for the iPhone the company presumed it could win by leveraging its manufacturing to create a more-efficient x86 chip; it was a decision that evinced too much knowledge of Intel’s margins and not nearly enough reflection on the importance of the integration between DOS/Windows and x86.

Intel took the same mistaken approach to non general-purpose processors, particularly graphics: the company’s Larrabee architecture was a graphics chip based on — you guessed it — x86; it was predicated on leveraging Intel’s integration, instead of actually meeting a market need. Once the project predictably failed Intel limped along with graphics that were barely passable for general purpose displays, and worthless for all of the new use cases that were emerging.

The latest crisis, though, is in design: AMD is genuinely innovating with its Ryzen processors (manufactured by both GlobalFoundries and TSMC), while Intel is still selling variations on Skylake, a three-year-old design. Ashraf Eassa, with assistance from a since-deleted tweet from a former Intel engineer, explains what happened:

According to a tweet from ex-Intel engineer Francois Piednoel, the company had the opportunity to bring all-new processor technology designs to its currently shipping 14nm technology, but management decided against it.

my post was actually pointing out that market stalling is more troublesome than Ryzen, It is not a good news. 2 years ago, I said that ICL should be taken to 14nm++, everybody looked at me like I was the craziest guy on the block, it was just in case … well … now, they know

— François Piednoël (@FPiednoel) April 26, 2018

The problem in recent years is that Intel has been unable to bring its major new manufacturing technology, known as 10nm, into mass production. At the same time, the issues with 10nm seemed to catch Intel off-guard. So, by the time it became clear that 10nm wouldn’t go into production as planned, it was too late for Intel to do the work to bring one of the new processor designs that was originally developed to be built on the 10nm technology to its older 14nm technology…

What Piednoel is saying in the tweet I quoted above is that when management had the opportunity to start doing the work to bring their latest processor design, known as Ice Lake (abbreviated “ICL” in the tweet), [to the 14nm process] they decided against doing so. That was likely because management truly believed two years ago that Intel’s 10nm manufacturing technology would be ready for production today. Management bet incorrectly, and Intel’s product portfolio is set to suffer as a result.

To put it another way, Intel’s management did not break out of the integration mindset: design and manufacturing were assumed to be in lockstep forever.

Integration and Disruption

It is perhaps simpler to say that Intel, like Microsoft, has been disrupted. The company’s integrated model resulted in incredible margins for years, and every time there was the possibility of a change in approach Intel’s executives chose to keep those margins. In fact, Intel has followed the script of the disrupted even more than Microsoft: while the decline of the PC finally led to The End of Windows, Intel has spent the last several years propping up its earnings by focusing more and more on the high-end, selling Xeon processors to cloud providers. That approach was certainly good for quarterly earnings, but it meant the company was only deepening the hole it was in with regards to basically everything else. And now, most distressingly of all, the company looks to be on the verge of losing its performance advantage even in high-end applications.

This is all certainly on Krzanich, and his predecessor Paul Otellini. Then again, perhaps neither had a choice: what makes disruption so devastating is the fact that, absent a crisis, it is almost impossible to avoid. Managers are paid to leverage their advantages, not destroy them; to increase margins, not obliterate them. Culture more broadly is an organization’s greatest asset right up until it becomes a curse. To demand that Intel apologize for its integrated model is satisfying in 2018, but all too dismissive of the 35 years of success and profits that preceded it.

So it goes.

AT&T, Time Warner, and the Need for Neutrality

The first thing to understand about the decision by a federal judge to approve AT&T’s acquisition of Time Warner, over the objection of the U.S. Department of Justice, is that it is very much in-line with the status quo: this is a vertical merger, and both the Department of Justice and the courts have defaulted towards approving such mergers for decades.1

Second, that there is an explosion of merger activity in and between the television production and distribution space is hardly a surprise: the Multichannel Video Programming Distributor (MVPD) business — that is, television distributed by cable, broadband, or satellite — has been shrinking for years now, and in a world where the addressable market is decreasing, the only avenues for growth are winning share from competitors, acquiring competitors, or vertically integrating.

Third, that last paragraph overstates the industry’s travails, at least in terms of television distribution, because most TV distributors are also internet service providers (ISPs), which means they are getting paid by consumers using the services disrupting MVPDs, including Netflix, Google, Facebook, and the Internet generally.

What was both unsurprising and yet odd about this case was the degree to which it was fought over point number two, with minimal acknowledgement of point number three. That is, it seems clear to me that AT&T made this acquisition with an eye on point number three, yet the government’s case was predicated on point number two; to that end, the government, in my eyes, rightly lost given the case they made. Whether they should have lost a better case is another question entirely.

Why AT&T Bought Time Warner

What is the point of a merger, instead of a contract? This is a question that always looms large in any acquisition, particularly one of this size: AT&T is paying $85 billion for Time Warner, and that’s an awfully steep price to simply hang out with movie stars.

The standard explanation for most mergers is “synergies”, the idea that there are significant cost savings from combining the operations of two companies; the reason this explanation is popular is because saving money is not an issue for antitrust, while the corresponding possibility — charging higher prices by achieving a stronger market position through consolidation — is. Such an explanation, though, is usually applied in the case of a horizontal merger, not a vertical one like AT&T and Time Warner.

To that end, AT&T was remarkably honest in its press release announcing the merger back in 2016:2

“With great content, you can build truly differentiated video services, whether it’s traditional TV, OTT or mobile. Our TV, mobile and broadband distribution and direct customer relationships provide unique insights from which we can offer addressable advertising and better tailor content,” [AT&T CEO Randall] Stephenson said. “It’s an integrated approach and we believe it’s the model that wins over time…

AT&T expects the deal to be accretive in the first year after close on both an adjusted EPS and free cash flow per share basis…Additionally, AT&T expects the deal to improve its dividend coverage and enhance its revenue and earnings growth profile.

Start with the second point: as I noted at the time, it's not very sexy, but it matters to AT&T, a 34-year member of the Dividend Aristocrats, that is, a company in the S&P 500 that has raised its dividend for at least 25 straight years. It's a core part of AT&T's valuation, but the company's free cash flow has been struggling to keep up with its rising dividends. Time Warner will help significantly in this regard, as did the previous acquisition of DirecTV.

It is the first point, though, that is pertinent to this analysis: how exactly might Time Warner allow AT&T to “build truly differentiated video services”?

The Government’s Case

While the AT&T press release noted that those “truly differentiated video services” could be delivered via traditional TV, OTT, or mobile, the government’s case was entirely concerned with traditional TV. The original complaint stated:

Were this merger allowed to proceed, the newly combined firm likely would — just as AT&T/DirecTV has already predicted — use its control of Time Warner’s popular programming as a weapon to harm competition. AT&T/DirecTV would hinder its rivals by forcing them to pay hundreds of millions of dollars more per year for Time Warner’s networks, and it would use its increased power to slow the industry’s transition to new and exciting video distribution models that provide greater choice for consumers. The proposed merger would result in fewer innovative offerings and higher bills for American families.

The idea is that AT&T could leverage its ownership of DirecTV to demand higher prices for Turner networks from other MVPDs, because if the MVPDs refused to pay customers would be driven to switch to DirecTV. The problem is that, as was easily calculable, this makes no economic sense: the amount of money AT&T would lose by blacking out Turner would almost certainly outweigh whatever gains it might accrue. The judge agreed, and that was that.
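The "easily calculable" claim can be made concrete with a back-of-envelope model; every figure below (subscriber counts, affiliate fees, switching rates, margins) is a hypothetical assumption chosen for illustration, not a number from the case.

```python
# Hypothetical blackout economics: what AT&T forgoes in Turner affiliate fees
# versus what it gains from subscribers who switch to DirecTV.
# All numbers are illustrative assumptions, not figures from the trial record.

rival_subs = 10_000_000            # subscribers at the rival MVPD (assumed)
turner_fee_per_sub_month = 3.00    # Turner affiliate fee per sub per month (assumed)
lost_fees_per_year = rival_subs * turner_fee_per_sub_month * 12

switch_rate = 0.02                 # share of rival subs who actually switch (assumed)
margin_per_sub_month = 30.00       # DirecTV contribution margin per sub (assumed)
gained_margin_per_year = rival_subs * switch_rate * margin_per_sub_month * 12

print(f"Fees lost:     ${lost_fees_per_year:,.0f}/yr")
print(f"Margin gained: ${gained_margin_per_year:,.0f}/yr")
print("Blackout pays off" if gained_margin_per_year > lost_fees_per_year
      else "Blackout is a money-loser")
```

Under these assumptions the forgone fees swamp the switching gains by a wide margin; only an implausibly high switch rate flips the result, which is the structure of the argument the judge accepted.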

AT&T’s Real Goals

Remember, though, that AT&T did not limit its options to traditional TV: what is far more compelling are the possibilities Time Warner content presents for OTT and mobile. The question is not what AT&T can do to increase the revenue potential of Time Warner content (which was the government’s focus), but rather what Time Warner content can do to increase the potential of AT&T’s services, particularly mobile.

Forgive the long excerpt, but I covered this angle at length in a Daily Update when the deal was announced:

AT&T’s core wireless business is competing in a saturated market with few growth prospects. Apple’s gift to the wireless industry of customers demanding high-priced data plans has largely run its course, with AT&T perhaps the biggest winner: the company acquired significant market share even as it increased its average revenue per user for nearly a decade, primarily thanks to the iPhone. Now, though, most everyone has a smartphone and, more pertinently, a data plan…

The implication of a saturated market is that growth is increasingly zero sum, which presents both a problem and an opportunity for AT&T. The problem is primarily T-Mobile: fueled by the massive break-up fee paid by AT&T for the aforementioned failed acquisition, T-Mobile has embarked on an all-out assault against the incumbent wireless carriers, and AT&T has felt the pain the most, recording a negative net change in postpaid wireless customers for eight straight quarters. Unable or unwilling to compete with T-Mobile on price, AT&T needs a differentiator, ideally one that will not only forestall losses but actually lead to gains.

At first glance this doesn’t explain the Time Warner acquisition either: per my point above these are two very different companies with two very different strategic views of content. A distributor in a zero-sum competition for subscribers (like AT&T) has a vertical business model: ideally there should be services and content that are exclusive to the distributor, thus securing customers. Time Warner, though, is a content company, which means it has a horizontal business model: content is made once and then monetized across the broadest set of potential customers possible, taking advantage of content’s zero marginal cost. The assumption of this sort of horizontal business model underlay Time Warner’s valuation; to suddenly make Time Warner’s content exclusive to AT&T would be massively value destructive (this is a reality often missed by suggestions that Apple, for example, should acquire content companies to differentiate its hardware).

AT&T, however, may have found a loophole: zero rating. Zero rating is often conflated with net neutrality, but unlike the latter, zero rating does not entail the discriminatory treatment of data; it just means that some data is free (sure, this is a violation of the idea of net neutrality, but this is why I was critical of the narrow focus on discriminatory treatment of data by net neutrality advocates). AT&T is already using zero rating to push DirecTV:

This is almost certainly the plan for Time Warner content as well: sure, it will continue to be available on all distributors, but if you subscribe to AT&T you can watch as much as you want for free; moreover, this offering is one that is strengthened by secular trends towards cord-cutting and mobile-only video consumption. If those trends continue on their current path AT&T will not only strengthen the moat of its wireless service against T-Mobile but maybe even start to steal share.

That this point never came up in the government’s case, and, by extension, the judge’s ruling, is truly astounding.

That noted, it is very fair to wonder why exactly the Department of Justice sued to block this acquisition: President Trump was very outspoken in his opposition to this deal and even more outspoken in his antipathy towards Time Warner-owned CNN. At the same time, Makan Delrahim, the Assistant Attorney General for Antitrust who led the case, didn’t see a problem with the merger before his appointment. That the government’s complaint rested on both the most obvious angle and, from AT&T’s perspective, the least important, suggests a paucity of rigor in the prosecution of this case; it is very reasonable to wonder if the order to oppose the merger came from the top, and that the easiest case was the obvious out.

The Neutrality Solution

Thus we are in the unfortunate scenario where a bad case by the government has led to, at best, a merger that was never examined for its truly anti-competitive elements, and at worst, bad law that will open the door for similar tie-ups. To be sure, it is not at all clear that the government would have won had they focused on zero rating: there is an obvious consumer benefit to the concept — that is why T-Mobile leveraged it to such great effect! — and the burden would have been on the government to show that the harm was greater.

The bigger issue, though, is the degree to which laws surrounding such issues are woefully out-of-date. Last fall I argued that Title II was the wrong framework to enforce net neutrality, even though net neutrality is a concept I absolutely support; I came to that position in part because zero rating was barely covered by the FCC’s action.3

What is clearly needed is new legislation, not an attempt to misapply ancient regulation in a way that is trivially reversible. Moreover, AT&T has a point that online services like Google and Facebook are legitimate competitors, particularly for ad dollars; said regulation should address the entire sector. To that end I would focus on three key principles:

  • First, ISPs should not purposely slow or block data on a discriminatory basis. I am not necessarily opposed to the concept of “fast lanes”, as I believe that offers significant potential for innovative services, although I recognize the arguments against them; it should be non-negotiable, though, that ISPs cannot purposely disfavor certain types of content.
  • Second, and similarly, dominant internet platforms should not be allowed to block any legal content from their services. At the same time, services should have discretion in monetization and algorithms; that anyone should be able to put content on YouTube, for example, does not mean that one has a right to have Google monetize it on their behalf, or surface it to people not looking for it.
  • Third, ISPs should not be allowed to zero-rate their own content, and platforms should not be allowed to prioritize their own content in their algorithms. Granted, this may be a bit extreme; at a minimum there should be strict rules and transparency around transfer pricing and a guarantee that the same rates are allowed to competitive services and content.

The reality of the Internet, as noted by Aggregation Theory, is increased centralization; meanwhile, the impact of the Internet on traditional media is an inexorable drive towards consolidation. Our current laws and antitrust jurisprudence are woefully unprepared to deal with this reality, and a new law guaranteeing neutrality is the best solution.

  1. Whether or not the presumption that vertical mergers are not anti-competitive is justified is a worthwhile, albeit separate, discussion [↩︎]
  2. To be fair, the company also mentioned synergies, but it was hardly the point of the press release. [↩︎]
  3. The FCC said it would take it case-by-case, and did argue in the waning days of the Obama administration that zero rating one's own services, as AT&T is clearly trying to do, was a violation, but that was never tested in court and was quickly rolled back [↩︎]

The Scooter Economy

As I understand it, the proper way to open an article about electric scooters is to first state one’s priors, explain the circumstances of how one came to try scooters, and then deliver a verdict. Unfortunately, that means mine is a bit boring: while most employing this format wanted to hate them,1 I was pretty sure scooters would be awesome — and they were!2

For me the circumstances were a trip to San Francisco; I purposely stayed at a hotel relatively far from where most of my meetings were, giving me no choice but to rely on some combination of scooters, e-bikes, and ride-sharing services. The scooters were a clear winner: fast, fun, and convenient — as long as you could find one near you. The city needs five times as many.

So, naturally, San Francisco banned them, at least temporarily: companies will be able to apply for their share of a pool of a mere 1,250 permits; that number may double in six months, but for now the scooter-riding experience will probably be more of a novelty, not something you can rely on. In fact, by the end of my trip, if I were actually in a rush, I knew to use a ride-sharing service.

It’s no surprise that ride-sharing services have higher liquidity: San Francisco is a car-friendly town. The city has a population of 884,363 humans and 496,843 vehicles, mostly in the city’s 275,000 on-street parking spaces. Granted, most of the Uber and Lyft drivers come from outside the city, but there is no congestion tax to deter them.

The result is an urban area stuck on a bizarre local maximum: most households have cars, but rarely use them, particularly in the city, because traffic is bad and parking is — relative to the number of cars — sparse; the alternative is ride-sharing, which incurs the same traffic costs but at least doesn't require parking. And yet, San Francisco, for now anyways, will only allow about 60 parking spaces-worth of scooters onto the streets.
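The "60 parking spaces-worth" comparison is simple arithmetic; the permit and parking counts come from the figures above, while the scooters-per-space density is my own assumption.

```python
# Rough arithmetic behind the parking-space comparison.
# scooters_per_parking_space is an assumption; the other figures appear above.

permits = 1_250                    # scooters allowed under SF's initial pilot
scooters_per_parking_space = 20    # assumed: scooters that fit in one car space
on_street_spaces = 275_000         # SF on-street parking spaces

spaces_worth_of_scooters = permits / scooters_per_parking_space
share_of_parking = spaces_worth_of_scooters / on_street_spaces

print(f"{spaces_worth_of_scooters:.0f} parking spaces' worth of scooters")
print(f"{share_of_parking:.4%} of on-street parking")
```

At roughly 20 scooters per space, 1,250 permits occupy around 62 spaces — a rounding error against 275,000 spaces devoted to cars.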

Everything as a Service

This is hardly the forum to discuss the oft-head-scratching politics of tech’s de facto capital city, and I can certainly see the downside of scooters, particularly the haphazard way with which they are being deployed; in an environment built for cars, scooters get in the way.

It’s worth considering, though, just how much sense dockless scooters make: the concept is one of the purest manifestations of what I referred to in 2016 as Everything as a Service:

What happens, though, if we apply the services business model to hardware? Consider an airplane: I fly thousands of miles a year, but while Stratechery is doing well, I certainly don’t own my own plane! Rather, I fly on an airplane that is owned by an airline that is paid for in part through some percentage of my ticket cost. I am, effectively, “renting” a seat on that airplane, and once that flight is gone I own nothing other than new GPS coordinates on my phone.

Now the process of buying an airplane ticket, identifying who I am, etc. is far more cumbersome than simply hopping in my car — there are significant transaction costs — but given that I can’t afford an airplane it’s worth putting up with when I have to travel long distances. What happens, though, when those transaction costs are removed? Well, then you get Uber or its competitors: simply touch a button and a car that would have otherwise been unused will pick you up and take you where you want to go, for a price that is a tiny fraction of what the car cost to buy in the first place. The same model applies to hotels — instead of buying a house in every city you visit, simply rent a room — and Airbnb has taken the concept to a new level by leveraging unused space.

The enabling factor for both Uber and Airbnb applying a services business model to physical goods is your smartphone and the Internet: it enables distribution and transaction costs to be zero, making it infinitely more convenient to simply rent the physical goods you need instead of acquiring them outright.

What is striking about dockless scooters — at least when one is parked outside your door! — is that they make ride-sharing services feel like half-measures: why even wait five minutes, when you can just scan-and-go? Steve Jobs described computers as bicycles of the mind; now that computers are smartphones and connected to the Internet they can conjure up the physical equivalent as well!

Indeed, the only thing that could make the experience better — for riders and for everyone else — would be dedicated lanes, like, for example, the 900 miles’ worth of parking spaces in San Francisco. To be sure, the city isn’t going to make the conversion overnight, or, given the degree to which San Francisco is in thrall to homeowners, probably ever, but that is particularly a shame in 2018: venture capitalists are willing to fund the entire thing, and I’m not entirely sure why.

Missing Moats

Late last month came word that Sequoia Capital was leading a $150 million funding round for Bird, one of the electric scooter companies, valuing the company at $1 billion; a week later came reports that GV was leading a $250 million investment in rival Lime.

One of the interesting tidbits in Axios’s reporting on the latter was that each Lime scooter is used on average between 8 and 12 times a day; plugging that number into this very useful analysis of scooter-sharing unit costs suggests that the economics of both startups are very strong (certainly the size of the investments — and the quality of the investors — suggests the same).
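For readers who want to see the shape of that unit-cost math, here is a minimal sketch; aside from the 8–12 rides-per-day range reported by Axios, every input below is an illustrative assumption of mine, not data from Bird, Lime, or the linked analysis:

```python
# Illustrative scooter-sharing unit economics; all inputs are assumptions.
rides_per_day = 10                     # midpoint of Axios's 8-12 figure
revenue_per_ride = 1.00 + 0.15 * 10    # $1 unlock + $0.15/min, ~10-minute ride
daily_revenue = rides_per_day * revenue_per_ride

charging_cost = 6.00                   # nightly charging bounty (assumed)
ops_cost = 4.00                        # repairs, rebalancing, support (assumed)
daily_margin = daily_revenue - (charging_cost + ops_cost)

scooter_cost = 400.00                  # hardware cost per scooter (assumed)
payback_days = scooter_cost / daily_margin
print(f"Daily margin ${daily_margin:.2f}; payback in {payback_days:.0f} days")
```

Under these assumptions a scooter pays for itself in under a month, which is why utilization matters so much: halve the rides per day and the daily margin collapses while the hardware wears out just as quickly.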

The key word in that sentence, though, is “both”: what, precisely, might make Bird and Lime, or any of their competitors, unique? Or, to put it in business parlance, where is the moat? This is where the comparison to ride-sharing services is particularly instructive; I explained back in 2014 why there was more of a moat to be had in ride-sharing than most people thought:

  • There is a two-sided network between drivers and riders
  • As one service gains share, its increased utilization of drivers will restrict liquidity on the other service, favoring the larger player
  • Riders will, all things being equal, use one service habitually

This leads to winner-take-all dynamics in a particular geographic area; then, when it comes time to launch in new areas, travelers and brand will give the larger service a head start.

To be sure, these interactions are complicated, and not everything is equal (see, for example, the huge amounts of share Lyft took last year thanks to Uber’s self-inflicted crises). It is that complication, though, and the fact it is exponentially more difficult to build a two-sided network (instead of, say, plopping a bunch of scooters on the street), that creates the conditions for a moat: the entire point of a moat is that it is hard to build.

Uber’s Self-Driving Mistake

This is why I have long maintained that the second-biggest mistake3 former Uber CEO Travis Kalanick made was the company’s head-first plunge into self-driving cars. On a surface level, the logic is obvious: Uber’s biggest cost is the driver, which means getting rid of them is an easy route to profitability — or, should someone else deploy self-driving cars first, then Uber could be undercut in price.

The mistake in Kalanick’s thinking is two-fold:

  • First, up until the point that self-driving cars are widely available — that is, not simply invented, but built-and-deployed at scale — Uber’s drivers are its biggest competitive advantage. Kalanick’s public statements on the matter hardly evinced understanding of this point.
  • Second, bringing self-driving cars to market would entail huge amounts of capital investment. For one, this means it would be unlikely that Google, a company that rushes to reassure investors when it loses tens of basis points in margin, would do so by itself, and for another, whatever companies did make such an investment would be highly incentivized to maximize utilization of said investment as soon as possible. That means plugging into the dominant transportation-as-a-service network, which means partnering with Uber.

My contention is that Uber would have been best-served concentrating all of its resources on its driver-centric model, even as it built relationships with everyone in the self-driving space, positioning itself to be the best route to customers for whoever wins the self-driving technology battle.

Uber’s Second Chance

Interestingly, scooters and their closely-related cousin, e-bikes, may give Uber a second chance to get this right. Absent two-sided network effects, the potential moats for, well, self-riding scooters and e-bikes are relatively weak: proprietary technology is likely to provide short-lived advantages at best, and Bird and Lime have plenty of access to capital. Both are experimenting with “charging-sharing”, wherein they pay people to charge the scooters in their homes, but both augment that with their own contractors to both charge vehicles and move them to areas with high demand.

What remains under-appreciated is habit: your typical tech first-adopter may have no problem checking multiple apps to catch a quick ride, but I suspect most riders would prefer to use the same app they already have on their phone. To that end, there is certainly a strong impetus for Bird and Lime to spread to new cities, simply to get that first-app-installed advantage, but this is where Uber has the biggest advantage of all: the millions of people who already have the Uber app.

To that end, I thought Uber’s acquisition of Jump Bikes was a good idea, and scooters should be next (an acquisition of Bird or Lime may already be too pricey, but Jump has a strong technical team that should be able to get an Uber-equivalent out the door soon). The Uber app already handles multiple kinds of rides; it is a small step to handling multiple kinds of transportation — a smaller step than installing yet another app.

More Tech Surplus

More generally, in a world where everything is a service, companies may have to adapt to shallower moats than they may like. If you squint, what I am recommending for Uber looks a bit like a traditional consumer packaged goods (CPG) strategy: control distribution (shelf-space | screen-space) with a few dominant products (e.g. Tide | UberX) that provide leverage for new offerings (e.g. Swiffer | Jump Bikes). The model isn’t nearly as strong, but there may be other potential lock-ins, particularly in terms of exclusive contracts with cities and universities.

Still, that is hardly the sort of dominance that accrues to digital-only aggregators like Facebook or Google, or even Netflix; the physical world is much harder to monopolize. That everything will be available as a service means a massive increase in efficiency for society broadly — more products will be available to more people for lower overall costs — even as the difficulty in digging moats means most of that efficiency becomes consumer surplus. And, as long as venture capitalists are willing to foot the bill, cities like San Francisco should take advantage.

I wrote a follow-up to this article in this Daily Update.

  1. That article is perhaps more revealing than the author appreciated [↩︎]
  2. Note: this article is going to focus on San Francisco for simplicity’s sake, although the broader points have nothing to do with San Francisco specifically; I am aware that the transportation situation is different in different cities — I do live in a different country, after all, in a city with fantastic public transportation and a plethora of personal transportation options. [↩︎]
  3. The first was not buying Lyft [↩︎]

The Cost of Developers

Yesterday saw three developer-related announcements, two from Apple, and one from Microsoft. The former came as part of Apple’s annual Worldwide Developers Conference keynote:

  • The iOS App Store, which turns 10 next month, serves 500 million weekly visitors, and as of later this week will have earned developers over $100 billion.
  • Sometime next year developers will be able to write apps for the Mac using iOS user interface frameworks (known as UIKit).

Microsoft, meanwhile, for the second time in three years, outshone Apple’s keynote with a massive acquisition. From the company’s press release:

Microsoft Corp. on Monday announced it has reached an agreement to acquire GitHub, the world’s leading software development platform where more than 28 million developers learn, share and collaborate to create the future. Together, the two companies will empower developers to achieve more at every stage of the development lifecycle, accelerate enterprise use of GitHub, and bring Microsoft’s developer tools and services to new audiences.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Satya Nadella, CEO, Microsoft. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

Under the terms of the agreement, Microsoft will acquire GitHub for $7.5 billion in Microsoft stock.

Developers can be quite expensive indeed!

Platform-Developer Symbiosis

Over the last few weeks, particularly in The Bill Gates Line, I have been exploring the differences between aggregators and platforms; while aggregators generally harvest already-produced content or goods, platforms are leveraged by developers to create something entirely new.

Platforms facilitate while aggregators intermediate

This results in a symbiosis between developers and platforms: from a technical perspective, platforms provide the fundamental building blocks (i.e. application program interfaces, or APIs) necessary for developers to build new experiences, and from a marketing perspective, those new experiences give customers a reason to buy the platform in the first place, or to upgrade.

The degree to which applications drive adoption of the underlying platform can, of course, vary; unsurprisingly the monetization potential of the platform relative to developers varies in a correlated way. Traditional Windows, for example, provided very little end user functionality; what made it so valuable were all of the applications built on top of its open platform.

Windows was an open platform

Here “open” means two things: first, the Windows API was available to anyone to build on, and second, developers built relationships directly with end users, including payment. This led to many huge software companies and, in 2003, to the creation of a platform on top of Windows: Valve’s Steam.

What Valve realized is that playing a game is only one part of the overall customer experience; the experience of discovering and buying the game matters as well, as does the installation and upgrade process. Moreover, these customer pain points were developer pain points as well; the original impetus to develop Steam, for example, was the difficulty in getting players to upgrade en masse, something that was essential for games in which players competed online. And, while Valve is a private company and has never announced Steam’s revenue numbers, reports suggest the platform generates billions of dollars a year.

Even that, though, pales in comparison to the iOS App Store: Apple took Steam’s app store idea and integrated it with the platform, such that iOS users and developers had no choice but to use Apple’s owned-and-operated distribution channel, with all of the various limitations and costs — 30%, to be precise — that that entailed.

The iPhone platform with an intermediation layer

Apple was able to accomplish this first and foremost because the underlying products — the iPhone and iPad — inspired demand in their own right, independent of applications. Apple had the users that developers needed to make money.

Second, the App Store, like Steam before it, really was a better experience that drove more downloads and purchases by end users. This meant that developing for iOS wasn’t simply attractive because of the number of users, but also because those users were willing to buy more than they would have on another platform.

Third — and this applies to Steam as well — the App Store dramatically lowered the barriers to entry for developers; this led to more apps, which attracted more users, which led to more apps, both locking in apps as a competitive advantage and also ensuring that no one app had outsized power (leaving Apple free to restrict Steam-like competitors by fiat).

Apple’s Platform Announcements

This frames the two Apple announcements I noted above. Start with the news of $100 billion for iOS developers: that means that Apple has collected around $40 billion, and at a very high margin to boot.
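That ~$40 billion figure falls straight out of the App Store’s standard 70/30 revenue split (setting aside the lower 15% rate Apple takes on long-running subscriptions); a quick sketch:

```python
# If the $100B paid to developers is the 70% share, Apple's 30% cut follows.
developer_payout = 100e9                 # developers' 70% share
gross_sales = developer_payout / 0.70
apple_cut = gross_sales * 0.30
print(f"Gross App Store sales: ${gross_sales / 1e9:.0f}B")  # ~$143B
print(f"Apple's cut: ${apple_cut / 1e9:.1f}B")              # ~$42.9B
```

Nearly $43 billion, collected on software Apple did not have to write: hence “around $40 billion, and at a very high margin to boot.”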

Moreover, the vast majority of Apple’s announcements were, if anything, about competing with those developers: the first new app announced, Measure, should immediately wipe out the only obviously useful Augmented Reality apps in the store. Apple also announced a new Podcasts app for the Watch and updated News, Stocks, and Voice Memos apps, and the only third-party demos were about how one of the largest software companies there is — Adobe — would be supporting Apple’s preferred 3D-image format. And why not! The implication of owning all of those high-value users is that, on iOS anyways, developers are cheap.

The Mac, though, is a different story: the platform is far smaller than the iPhone; that there remain a number of high quality independent software vendors supporting the Mac is a testament to how valuable it is for developers to be able to build direct relationships with customers that can span years and multiple transactions. Still, there seems little question that the number of Mac apps is, if not trending in the wrong direction, certainly not growing in any meaningful way; there simply aren’t enough users to entice developers.

That means Apple’s approach has to be very different from iOS: instead of dictating terms to developers, Apple announced that it is in the middle of a multi-year project to make it easier to port iOS apps to the Mac. This is, in a fashion, Apple paying for Mac apps; no, the money isn’t going to developers, but Apple is voluntarily taking on a much greater portion of the porting workload. Developers are much more expensive when you don’t have nearly as many users.

The Cost of GitHub

Still, whatever it is costing Apple to build this porting framework, it surely is a lot less than $7.5 billion, the price Microsoft is paying for GitHub. Then again, at first glance, it may not be clear what the point of comparison is.

Go back to Windows: Microsoft had to do very little to convince developers to build on the platform. Indeed, even at the height of Microsoft’s antitrust troubles, developers continued to favor the platform by an overwhelming margin, for an obvious reason: that was where all the users were. In other words, for Windows, developers were cheap.

That is no longer the case today: Windows remains an important platform in the enterprise and for gaming (although Steam, much to Microsoft’s chagrin, takes a good amount of the platform profit there), but the company has no platform presence in mobile, and is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft.

This is the context for thinking about the acquisition of GitHub: lacking a platform with sufficient users to attract developers, Microsoft has to “acquire” developers directly through superior tooling and now, with GitHub, a superior cloud offering with a meaningful amount of network effects. The problem is that acquiring developers in this way, without the leverage of users, is extraordinarily expensive; it is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price.

Again, though, GitHub revenue is not the point; Microsoft has plenty of revenue. What it also has is a potentially fatal weakness: no platform with user-based leverage. Instead Microsoft is betting that a future of open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future, and that there is room in that future for a company to win by offering a superior user experience for developers directly, not simply exerting leverage on them.

This, by the way, is precisely why Microsoft is the best possible acquirer for GitHub, a company that, having raised $350 million in venture capital, was possibly not going to make it as an independent entity. Any company with a platform with a meaningful amount of users would find it very hard to resist the temptation to use GitHub as leverage; on the other side of the spectrum, purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company.

What Microsoft wants is much fuzzier: it wants to be developers’ friend, in large part because it has no other option. In the long run, particularly as Windows continues to fade, the company will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users.

That, though, is exactly why Microsoft had to pay so much: buying in directly is a whole lot more expensive than using leverage, which can produce equivalent — or better! — returns for much less investment.

The Bill Gates Line

Two of the more famous military sayings are “Generals are always preparing to fight the last war”, and “Never interrupt your enemy while he is making a mistake.” I thought of the latter at the conclusion of last Sunday’s 60 Minutes report on Google:

Google declined our request for an interview with one of its executives for this story, but in a written response to our questions, the company denied it was a monopoly in search or search advertising, citing many competitors including Amazon and Facebook. It says it does not make changes to its algorithm to disadvantage competitors and that, “our responsibility is to deliver the best results possible to our users, not specific placements for sites within our results. We understand that those sites whose ranking falls will be unhappy and may complain publicly.”

The 60 Minutes report was not exactly fair-and-balanced; it featured an anti-tech-monopoly crusader1, an anti-tech-monopoly activist, an anti-tech-monopoly regulator, and Yelp CEO Jeremy Stoppelman. And, in what seems highly unlikely to have been a coincidence, Yelp this week filed a new antitrust complaint in the EU against Google. To be sure, just because a report was biased does not mean it was wrong; while I am a bit skeptical of the EU’s antitrust case against Google Shopping, the open case about Android seems pretty clear-cut. Neither, though, is Yelp’s direct concern.

Yelp’s Case Against Google

This is from a blog post about the 60 Minutes feature:

Yelp did participate in the piece because Google is doing the opposite of “delivering the best results possible,” and instead is giving its own content an unlawful advantage. We’ve made a video to explain exactly how Google puts its own interests ahead of consumers in local search, which you can watch here:

Yelp’s position, at least in this video, appears to be that Google’s answer box is anticompetitive because it only includes reviews and ratings from Google; presumably the situation could be resolved were Google to use sources like Yelp. There are three problems with this argument, though:

  • First, the answer box originally included content scraped from sources like Yelp and other vertical search sites; under pressure from the FTC, driven in part by complaints from Yelp and other vertical search engines, Google agreed to stop doing so in 2013.2
  • Second, in a telling testament to the power of being on top of search results, Google’s ratings and reviews have improved considerably in the two years since that video was posted; this isn’t a static market (to be sure, this is an argument that could be used on both sides).
  • Third — and this is the point of this article — what Yelp seems to want will only serve to make Google stronger.

No wonder Google declined the request for an interview.

The Bill Gates Line

Over the last few weeks I have been exploring what differences there are between platforms and aggregators, and was reminded of this anecdote from Chamath Palihapitiya in an interview with Semil Shah:

Semil Shah: Do you see any similarities from your time at Facebook with Facebook platform and connect, and how Uber may supercharge their platform?

Chamath: Neither of them are platforms. They’re both kind of like these comical endeavors that do you as an Nth priority. I was in charge of Facebook Platform. We trumpeted it out like it was some hot shit big deal. And I remember when we raised money from Bill Gates, 3 or 4 months after — like our funding history was $5M, $83 M, $500M, and then $15B. When that 15B happened a few months after Facebook Platform and Gates said something along the lines of, “That’s a crock of shit. This isn’t a platform. A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.”

By this measure Windows was indeed the ultimate platform — the company used to brag about only capturing a minority of the total value of the Windows ecosystem — and the operating system’s clear successors are Amazon Web Services and Microsoft’s own Azure Cloud Services. In all three cases there are strong and durable businesses to be built on top.

A drawing of Platform Businesses Attract Customers by Third Parties
From Tech’s Two Philosophies

Once a platform dips under the Bill Gates Line, though, the long-term potential of a business built on a “platform” starts to decline. Apple’s App Store, for example, has all of the trappings of a platform, but Apple quite clearly captures the vast majority of the overall ecosystem, both because of the profitability of the iPhone and also because of its control of App Store economics; the paucity of strong and durable businesses on the App Store is a natural outgrowth of that.

The App Store intermediates 3rd parties and users

Note that Apple’s ability to control the economics of its developers comes from intermediating the relationship of those developers with customers.

Aggregators, Not Platforms

Facebook and Google take this intermediation to the extreme, leveraging their ability to drive discovery of the sheer abundance of information on their network and the Internet broadly:

A drawing of Aggregators Own Customer Relationships and Suppliers Follow
In the aggregator business model the aggregator owns customers and suppliers follow

It follows that Facebook and Google’s “platforms” not only don’t meet the Bill Gates Line, they don’t even register on the graph: they are the purest expression of aggregators. From my original formulation:

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be aggregated at scale leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

The result is the shift in value predicted by the Conservation of Attractive Profits. Previous incumbents, such as newspapers, book publishers, networks, taxi companies, and hoteliers, all of whom integrated backwards, lose value in favor of aggregators who aggregate modularized suppliers — which they often don’t pay for — to consumers/users with whom they have an exclusive relationship at scale.

This is ultimately the most important distinction between platforms and aggregators: platforms are powerful because they facilitate a relationship between 3rd-party suppliers and end users; aggregators, on the other hand, intermediate and control it.

Moreover, at least in the case of Facebook and Google, the point of integration in their respective value chains is the network effect. This is what I was trying to get at last week in The Moat Map with my discussion of the internalization of network effects:

  • Google has had the luxury of operating in an environment — the world wide web — that was by default completely open. That let the best technology win, and that win was augmented by the data that comes from serving an ever-increasing portion of the market. The end result was the integration of end users and the data feedback cycle that made Google search better and better the more it was used.
  • Facebook’s differentiator, meanwhile, is the relationships between friends and family; the company has subsequently integrated that network effect with consumer attention, forcing all of the content providers to jostle for space in the Newsfeed as pure commodities.

It’s worth noting, by the way, why it was that Facebook could come to be a rival to Google in the first place; specifically, Facebook had exclusive data — those relationships and all of the behavior on Facebook’s site that resulted — that Google couldn’t get to. In other words, Facebook succeeded not by being a part of Google, but by being completely separate.

Succeeding in a World of Aggregators

This gets at why I find Yelp’s complaints a bit beside the point: the company seems to be expending an awful lot of energy to regain the right to give Google the content Yelp worked hard to acquire. There is revenue there, of course, just as there is in the production of commodities generally, but without a sustainable cost advantage it’s not the best route to building a strong and durable business.

Of course that is the bigger problem: I noted above that Google’s library of ratings and reviews has grown substantially over the past few years; users generating content are the ultimate low-cost supplier, and losing that supply to Google is arguably a bigger problem for Yelp than whatever advertising revenue it can wring out of people who would click through on a hypothetical Google Answer Box that used 3rd-party sources. And, it should be noted, Yelp’s entire business is user-generated reviews: it and similar vertical sites are likely to do a far better job of generating, organizing, and curating such data.

Still, I can’t help but wonder whether Yelp’s problem is not that Google is using its own content in the Answer Box, but rather the Answer Box itself; which of these sets of results would be better for Yelp’s business, even in a hypothetical world where Answer Box content comes from Yelp?

Yelp would get more visitors without the answer box

Presuming that the answer is the image on the right — driving users to Yelp is both better for the bottom line and better for content generation, which mostly happens on the desktop — it becomes clear that Yelp’s biggest problem is that the more useful Google is — even if it only ever uses Yelp’s data! — the less viable Yelp’s business becomes. This is exactly what you would expect in an aggregator-dominated value chain: aggregators completely disintermediate suppliers and reduce them to commodities.

To that end, this is why the best strategies entail business models that avoid Google and Facebook completely: look no further than Amazon, which last month stopped buying Google Shopping ads, something the company can afford to do given that half of shoppers start their product searches on Amazon. To be sure, Amazon is plenty powerful in its own right, but it is a hard-to-ignore example of Google’s favorite argument that “competition is only a click away.”

Yelp Versus Google

Still, I have sympathy for Yelp’s position; Stoppelman told 60 Minutes:

If I were starting out today, I would have no shot of building Yelp. That opportunity has been closed off by Google and their approach…because if you provide great content in one of these categories that is lucrative to Google, and seen as potentially threatening, they will snuff you out.

Stoppelman is right, but the reason is perhaps less nefarious than it seems; the 60 Minutes report explained why in the voiceover:

Yelp and countless other sites depend on Google to bring them web traffic — eyeballs for their advertisers.

Yelp, like many other review sites, has deep roots in SEO — search-engine optimization. Their entire business was long predicated on Google doing their customer acquisition for them. To the company’s credit it has become a well-known brand in its own right, and now gets around 70% of its visits via its mobile app. Those visits are very much in the Amazon model I highlighted above: users are going straight to Yelp and bypassing Google directly.

That, though, isn’t great for Google! It seems a bit rich that Yelp should be free to leverage its app to avoid Google completely, and yet demand that Google continue to feature Yelp prominently in its search results, particularly on mobile, where the Answer Box has particular utility. I get that Yelp feels like Google has changed the terms of the deal from when Yelp was founded in 2004, but the reality is that the change that truly mattered was mobile.

What I do find compelling is a new video that Yelp put out yesterday; while it makes many of the same points as the one above, instead of being focused on regulators it targets Google itself, arguing that Google isn’t living up to its own standards by not featuring the best results, and by not driving traffic back to the sites that make the content Google needs (for example, by omitting prominent links to the content filling its answer boxes; Yelp isn’t asking that the boxes go away, just that they drive traffic to third parties). Google may be an aggregator, but it still needs supply, which means it needs a sustainable open web. The company should listen.

Facebook and Data Portability

Facebook, unfortunately for its suppliers, faces no such constraints: the content that is truly differentiated is made by Facebook’s users, and it is wholly owned by Facebook. Facebook is even further from the Bill Gates Line than Google is: the latter at least needs commoditized suppliers; the former can take or leave them on a whim, and does.

That is why I’ve come to realize that data portability — a popular prescription for Facebook’s dominance, put forward this week by a coalition of progressive organizations under the umbrella Freedom From Facebook — is so mistaken.3 The problem with data portability is that it goes both ways: if you can take your data out of Facebook to other applications, you can do the same thing in the other direction. The question, then, is which entity is likely to have the greater center of gravity with regards to data: Facebook, with its social network, or practically anything else?

Facebook at the center of data exchange
From The Facebook Brand

Remember the conditions that led to Facebook’s rise in the first place: the company was able to circumvent Google, go directly to users, and build a walled garden of data that the search company couldn’t touch. Partnering or interoperating with companies below the Bill Gates Line, particularly aggregators, is simply an invitation to be intermediated. To demand that governments enforce exactly that would be a mistake that only helps Facebook.4


The broader takeaway is that distinguishing between platforms and aggregators isn’t simply an academic exercise: it should affect how companies think about their competitive environment vis-à-vis the biggest companies in tech, and, just as importantly, it should weigh heavily on regulators. The Microsoft antitrust battles of the 2000s were in many respects about enforcing interoperability as a way of breaking into the Microsoft platform; today antitrust should be far more concerned about aggregators capturing everything they touch by virtue of their control of end users.

That’s the thing about the “Generals fight the last war” saying; it’s usually applied to the losing side that made mistake after mistake while the victors leveraged the new world order.

I wrote a follow-up to this article in this Daily Update.

  1. I’ve discussed why I disagree with Gary Reback’s views on monopoly and innovation in this Daily Update [↩︎]
  2. With regard to that FTC decision, yes, as the Wall Street Journal reported, some FTC staff members recommended suing Google; what is not true is that the recommendation was unanimous, or that the FTC commissioners ultimately deciding to go in another direction was unusual. In fact, staff in other groups recommended against the suit, and the decision of the FTC commissioners was unanimous. Again, that is not to say it was the right decision, but the popular conception — including what was reported in that 60 Minutes piece — is a bit off [↩︎]
  3. To be fair, I’ve made the same argument previously, but I’ve changed my mind [↩︎]
  4. The group’s demand that Facebook be forced to divest Instagram, WhatsApp, and Messenger makes much more sense in terms of this framework (with the exception of Messenger, which has always been a part of Facebook). I strongly believe that the single best antitrust remedy for aggregators is limiting acquisitions [↩︎]