The Aggregator Paradox

Which one of these options sounds better?

  • Fast loading web pages with responsive designs that look great on mobile, and ads that are respectful of the user experience
  • The elimination of pop-up ads, ad overlays, and autoplaying videos with sounds

Google is promising both; is the company’s offer too good to be true?

Why Web Pages Suck Redux

2015 may have been the nadir in terms of the user experience of the web, and in Why Web Pages Suck, I pinned the issue on publishers’ broken business model:

If you begin with the premise that web pages need to be free, then the list of stakeholders for most websites is incomplete without the inclusion of advertisers…Advertisers’ strong preference for programmatic advertising is why it’s so problematic to only discuss publishers and users when it comes to the state of ad-supported web pages: if advertisers are only spending money — and a lot of it — on programmatic advertising, then it follows that the only way for publishers to make money is to use programmatic advertising…

The price of efficiency for advertisers is the user experience of the reader. The problem for publishers, though, is that dollars and cents — which come from advertisers — are a far more scarce resource than are page views, leaving publishers with a binary choice: provide a great user experience and go out of business, or muddle along with all of the baggage that relying on advertising networks entails.

My prediction at the time was that Facebook Instant Articles — the Facebook-native format that the social network promised would speed up load times and enhance the reading experience, thus driving more engagement with publisher content — would become increasingly important to publishers:

Arguably the biggest takeaway should be that the chief objection to Facebook’s offer — that publishers are giving up their independence — is a red herring. Publishers are already slaves to the ad networks, and their primary decision at this point is which master — ad networks or Facebook — is preferable?

In fact, the big winner to date has been Google’s Accelerated Mobile Pages (AMP) initiative, which launched later that year with similar goals — faster page loads and a better reading experience. From Recode:

During its developer conference this week, Google announced that 31 million websites are using AMP, up 25 percent since October. Google says these fast-loading mobile webpages keep people from abandoning searches and by extension drive more traffic to websites.

The result is that in the first week of February, Google sent 466 million more pageviews to publishers — nearly 40 percent more — than it did in January 2017. Those pageviews came predominantly from mobile and AMP. Meanwhile, Facebook sent 200 million fewer, or 20 percent less. That’s according to Chartbeat, a publisher analytics company whose clients include the New York Times, CNN, the Washington Post and ESPN. Chartbeat says that the composition of its network didn’t materially change in that time.

The Chartbeat data above doesn’t include Instant Articles specifically, but most accounts suggest the initiative is faltering: the Columbia Journalism Review posited that more than half of Instant Articles’ launch partners had abandoned the format, and Jonah Peretti, the CEO of BuzzFeed, the largest publisher to remain committed to the format, has taken to repeatedly criticizing Facebook for not sharing sufficient revenue with publications committed to the platform.

Aggregation Management

The relative success of Instant Articles versus AMP is a reminder that managing an ecosystem is a different skill than building one. Facebook and Google are both super-aggregators:

Super-Aggregators operate multi-sided markets with at least three sides — users, suppliers, and advertisers — and have zero marginal costs on all of them. The only two examples are Facebook and Google, which in addition to attracting users and suppliers for free, also have self-serve advertising models that generate revenue without corresponding variable costs (other social networks like Twitter and Snapchat rely to a much greater degree on sales-force driven ad sales).

Super-Aggregators are the ultimate rocket ships, and during the ascent ecosystem management is easy: keep the rocket pointed up-and-to-the-right with regards to users, and suppliers — publishers, in this case — will have no choice but to clamor for their own seat on the spaceship.

The problem — and forgive me if I stretch this analogy beyond the breaking point — comes when the oxygen is gone. The implication of Facebook and Google effectively taking all digital ad growth is that publishers increasingly can’t breathe, and while that is neither company’s responsibility on an individual publisher basis, it is a problem in aggregate, as Instant Articles is demonstrating. Specifically, Facebook is losing influence over the future of publishing to Google.

A core idea of Aggregation Theory is that suppliers — in the case of Google and Facebook, that is publishers — commoditize themselves to fit into the modular framework that is their only route to end users owned by the aggregator. Critically, suppliers do so out of their own self-interest; consider the entire SEO industry, in which Google’s suppliers pay consultants to better make their content into the most Google-friendly commodity possible, all in the pursuit of greater revenue and profits.

This is a point that Facebook seems to have missed: the power that comes from directing lots of traffic towards a publisher stems from the revenue that results from said traffic, not the traffic itself. To that end, Facebook’s too-slow rollout of Instant Articles monetization, and continued underinvestment in (if not outright indifference to) the Facebook Audience Network (for advertisements everywhere but the uber-profitable News Feed), has left an opening for Google: the search giant responded by iterating AMP far more quickly, not just in terms of formatting but especially monetization.

Critically, that monetization was not limited to Google’s own ad networks: from the beginning AMP has been committed to supporting multiple ad networks, which sidestepped the trap Facebook found itself in. By not taking responsibility for publisher monetization Google made AMP more attractive than Instant Articles, which took responsibility and then failed to deliver.1

I get Facebook’s excuse: News Feed ads are so much more profitable for the company than Facebook Audience Network ads that, from a company perspective, it makes more sense to devote the vast majority of the company’s resources to the former; from an ecosystem perspective, though, the neglect of Facebook Audience Network has been a mistake. And that, by extension, is why Google’s approach was so smart: Google has the same incentives as Facebook to focus on its own advertising, but it also has the ecosystem responsibility to ensure the incentives in place for its suppliers pay off. Effectively offloading that payoff to third party networks ensures publishers get paid even as Google’s own revenue generation focuses on the search results surrounding those AMP articles.

Google’s Sticks

Search, of course, is the far more important reason why AMP is a success: Google prioritizes the format in search results. Indeed, for all of the praise I just heaped on AMP with regards to monetization, AMP CPMs are still significantly lower than those of traditional mobile web pages; publishers, though, are eager to support the format because a rush of traffic from Google more than makes up for it.
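
To put rough numbers on that tradeoff, here is a minimal sketch; the CPMs and pageview counts are hypothetical, chosen only to illustrate why a worse per-pageview rate can still be a good deal:

```python
# Hypothetical illustration: a lower AMP CPM can still produce more
# revenue if prioritized search placement delivers enough extra traffic.

def ad_revenue(pageviews: int, cpm: float) -> float:
    """CPM is dollars per 1,000 ad impressions; assume one impression per pageview."""
    return pageviews / 1000 * cpm

regular_mobile_web = ad_revenue(pageviews=1_000_000, cpm=8.00)  # $8,000
amp = ad_revenue(pageviews=1_800_000, cpm=5.50)                 # $9,900

print(f"Regular mobile web: ${regular_mobile_web:,.0f}")
print(f"AMP: ${amp:,.0f}")
# AMP wins despite a ~30% lower CPM because traffic rose 80%.
```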

Here too Facebook failed to apply its power as an aggregator: if monetization is a carrot, favoring a particular format is a stick, and Facebook never wielded it. Contrary to expectations the social network never gave Instant Articles higher prominence in the News Feed algorithm, which meant publishers basically had the choice between more-difficult-to-monetize-but-faster-to-load Instant Articles or easier-to-monetize-and-aren’t-our-resources-better-spent-fixing-our-web-page? traditional web pages. Small wonder the latter won out!

In fact, for all of the criticism Facebook has received for its approach to publishers generally and around Instant Articles specifically, it seems likely that the company’s biggest mistake was that it did not leverage its power in the way that Google was more than willing to.

That’s not the only Google stick in the news: the company is also starting to block ads in Chrome. From the Wall Street Journal:

Beginning Thursday, Google Chrome, the world’s most popular web browser, will begin flagging advertising formats that fail to meet standards adopted by the Coalition for Better Ads, a group of advertising, tech and publishing companies, including Google, a unit of Alphabet Inc…

Sites with unacceptable ad formats—annoying ads like pop-ups, auto-playing video ads with sound and flashing animated ads—will receive a warning that they’re in violation of the standards. If they haven’t fixed the problem within 30 days, all of their ads — including ads that are compliant — will be blocked by the browser. That would be a major blow for publishers, many of which rely on advertising revenue.

The decision to curtail junk ads is partly a defensive one for both Google and publishers. Third-party ad blockers are exploding, with as many as 615 million devices world-wide using them, according to some estimates. Many publishers expressed optimism that eliminating annoying ads will reduce the need for third-party ad blockers, raise ad quality and boost the viability of digital advertising.

Nothing quite captures the relationship between suppliers and their aggregator like the expression of optimism that one of the companies actually destroying the viability of digital advertising for publishers will save it; then again, that is why Google’s carrots, while perhaps less effective than its sticks, are critical to making an ecosystem work.

Aggregation’s Antitrust Paradox

The problem with Google’s actions should be obvious: the company is leveraging its monopoly in search to push the AMP format, and the company is leveraging its dominant position in browsers to punish sites with bad ads. That seems bad!

And yet, from a user perspective, the options I presented at the beginning — fast loading web pages with responsive designs that look great on mobile and the elimination of pop-up ads, ad overlays, and autoplaying videos with sounds — sound pretty appealing!

This is the fundamental paradox presented by aggregation-based monopolies: by virtue of gaining users through the provision of a superior user experience, aggregators gain power over suppliers, which come onto the aggregator’s platforms on the aggregator’s terms, resulting in an even better experience for users, resulting in a virtuous cycle. There is no better example than Google’s actions with AMP and Chrome ad-blocking: Google is quite explicitly dictating exactly how its suppliers will access its customers, and it is hard to argue that the experience is not significantly better because of it.

At the same time, what Google is doing seems nakedly uncompetitive — thus the paradox. The point of antitrust law — both the consumer-centric U.S. interpretation and the European competitor-centric one — is ultimately to protect consumer welfare. What happens when protecting consumer welfare requires acting uncompetitively? Note that implicit in my analysis of Instant Articles above is that Facebook was not ruthless enough!

The Ad Advantage

That Google might be better for users by virtue of acting like a bully isn’t the only way in which aggregators mess with our preconceived assumptions about the world. Consider advertising: many commentators assume that user annoyance with ads will be the downfall of companies like Google and Facebook.

That, though, is far too narrow an understanding of “user experience”: the user experience is not simply the user interface, but rather the totality of an app or web page. In the case of Google, it has superior search, it is now promising faster web pages and fewer annoying ads, and oh yeah, it is free to use. Yes, consumers are giving up their data, but even there Google has the user experience advantage: consumer data is far safer with Google than it is with random third party ad networks desperate to make their quarterly numbers.

Free matters in another way: in disruption theory integrated incumbents are thought to lose not only because of innovation in modular competing systems, but also because modular systems are cheaper: the ad advantage, though, is that the integrated incumbents — Google and Facebook — are free to end users. That means potential challengers have to have that much more of a superior user experience in every other aspect, because they can’t be cheaper.2

In other words, we can have our cake and eat it too — and it’s free to boot. Hopefully it’s not poisonous.

  1. Instant Articles allows publishers to sell their own ads directly, but explicitly bans third party ad networks
  2. This, as an aside, is perhaps the biggest advantage of cryptonetworks: I’ve already noted in Tulips, Myths, and Cryptocurrencies that cryptonetworks are “probably the most viable way out from the antitrust trap created by Aggregation Theory”; that was in reference to decentralization, but that there is money to be made is itself an advantage when the competition is free. More on this tomorrow.

Apple’s Middle Age

Forgive the personal aside, but our family bought some furniture yesterday, and it wasn’t half bad. We’re moving house, and I’m hopeful it will be the last time for a while; given my personal history that is saying something.

By my count this will be my 12th apartment since I graduated from college, and it never made much sense to invest in anything beyond Ikea. Sure, that number is a bit extreme, but from my perspective the optionality that comes from the willingness to move around was worth the packing pain; now that my kids are in school and my career die is cast — at least for the time being — the prospect of staying put for more than a year or two comes as a relief.

In other words, I’m hitting middle age, with the change in circumstances and priorities that entails.

iPod on Windows

Apple, at least in human terms, is officially over the hill: the company’s 40th birthday was last April. In truth, though, the first Apple died and was reborn in 1997 with the return of Steve Jobs, at a time when the company was weeks away from bankruptcy.

The cover of the June 1997 edition of Wired

What happened next is certainly familiar to everyone reading this: after slashing products and re-focusing the company around a dramatically simplified product line, Jobs shepherded the introduction of the iMac and, three years after that, the iPod. Perhaps no decision looms larger, though, than releasing iTunes — the software yin to the iPod’s hardware yang — for Windows. Erstwhile Apple analyst Gene Munster told the San Francisco Chronicle:

For Apple, the Windows version of iTunes is part of a “very slow but real shift” in strategy, said Gene Munster, senior research analyst with U.S. Bancorp Piper Jaffray. “They’ve tried everything to get their installed base to grow, but it just doesn’t grow. What you’re going to see in the coming years is a different Apple.” Ironically, Munster said iTunes for Windows may ultimately help sell more iPods but fewer Macintoshes, because it works well enough with a PC.

In fact, the opposite occurred — at least in the long run. The iPod took off like a rocket, dominating the portable music industry until it was killed by the smartphone, specifically the iPhone. And, over time, more and more satisfied iPod and iPhone customers began considering Macs; macOS devices have outgrown the overall market (which is shrinking) nearly every quarter for years.

That’s a side story though: while the iPod and the first few editions of the iPhone needed a PC, the latter eventually became independent, an effectively full-fledged computer in its own right. Indeed, most consumer electronics devices now presume that the customer has a smartphone, which makes sense: nearly everyone that has a PC has a smartphone, but there are around a billion people who only have the device in their pocket.

And, come Friday, there will be at least one prominent device for sale that requires not just a smartphone but an iOS device specifically: HomePod.

The HomePod Strategy

The strategy around the HomePod, at least from my perspective, is far more fascinating than the device itself; while it does sound great (at least in the controlled press briefing where I heard it), I have an Echo Dot (and a Google Home-controlled Chromecast, for that matter) connected to my living room stereo that sounds better.

To get a full music library with either, though, requires a separate music subscription — Spotify in my case. And while I am the sort of profitable (for the music industry) idiot that pays for effectively the same service twice,1 most people only subscribe to one music service. And, for iPhone users, which service is that likely to be? Unsurprisingly, given its prominence on the device combined with Apple customer loyalty, the answer, at least in the United States, is increasingly Apple Music. From the Wall Street Journal:

Apple Inc.’s streaming-music service, introduced in June 2015, has been adding subscribers in the U.S. more rapidly than its older Swedish rival — a monthly growth rate of 5% versus 2% — according to people in the record business familiar with figures reported by the two services. Assuming that clip continues, Apple will overtake Spotify in the world’s biggest music market this summer. Apple’s music-streaming service has been quietly gaining ground in part thanks to the popularity of the company’s devices: Apple Music comes preloaded on all iPhones, Apple Watches and other hardware the company sells.

That last sentence explains why this isn’t a surprise; my criticism of Apple Music when it launched was not a statement on whether or not the service would be successful, but whether or not it was worth the trouble, particularly in terms of focus. I wrote in a Daily Update later that year:

One interesting angle to [a Taylor Swift exclusive] is that the fact Apple Music exists for Android likely makes it much more palatable for Swift than an Apple-specific service would be. But, that leads to the natural question: what ultimate benefit is Apple deriving from however much they are paying Swift, or from Apple Music as a whole? I know many of you are sick of me asking this question, and, in fact, I am an Apple Music subscriber myself: I find the integration with the Apple Watch to be particularly useful when driving around with two music-loving kids in the backseat of my car. My issue, rather, is about opportunity cost: why can’t Apple architect their platform so that other services can fulfill this low-margin middle-person role, freeing up resources to focus on the sorts of things that only Apple can do?

HomePod is the best answer yet, and I have to admit, I’m pretty impressed by Apple’s foresight.

The Apple Music Bridge

One more important piece of background: CEO Tim Cook and CFO Luca Maestri have been pushing the narrative that Apple is a services company for two years now, starting with the 1Q 2016 earnings call in January 2016. At that time iPhone growth had barely budged year-over-year (it would fall the following three quarters), and it came across a bit as a diversion; after all, it’s not like the company was changing its business model. I wrote at the time:

As I’ve written innumerable times, services (horizontal) and hardware (vertical) companies have very different strategic priorities: the former ought to maximize their addressable market (by, say, making a cheaper iPhone), while the latter ought to maximize their differentiation. And, Cook’s answer made clear what Apple’s focus remains:

Our strategy is always to make the best products…We have the premium part of our line is the 6s and the 6s Plus. We also have a mid-price point, with the iPhone 6 and the iPhone 6 Plus. And we continue to offer the iPhone 5s in the market and it continues to do quite well. And so we offer all of those and I don’t see us deviating from that approach.

To be clear, I think this is the exact right approach for Apple…But let’s be honest: that means Apple is not a services company; they have a nice services revenue stream, but the company is rightly judged now and for the foreseeable future on the performance of its hardware.

I still think this was the right strategic analysis — Apple’s services differentiate its hardware, as opposed to its hardware existing to push Apple’s services — but it was the wrong financial analysis: Apple’s services may be exclusive to Apple devices,2 but Apple’s install base is so large — 1.3 billion devices, according to Thursday’s 1Q 2018 earnings call — that Services revenue will inevitably rise with a user base that is both growing in terms of numbers and usage, and that is meaningful indeed ($8.5 billion last quarter alone).

Apple Music, though, simply isn’t that meaningful financially, no matter how fast it has grown: 36 million subscribers at $10/month each is just over $1 billion in revenue a quarter (likely less, given that user number includes folks on family plans); more importantly, actual profit may very well be negative, given that the vast majority of revenue goes to record labels and publishers (as a point of comparison, Spotify is reported to operate in the red). It simply isn’t a part of the Services financial story (which is first and foremost the App Store, followed by Google search payments).
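
As a sanity check on that arithmetic, here is a rough sketch using the article’s round numbers; real revenue is lower still, since the subscriber figure includes family-plan members who do not each pay $10, and the ~70% payout to labels and publishers is an industry rule of thumb, not a disclosed Apple number:

```python
# Back-of-the-envelope Apple Music economics.
subscribers = 36_000_000
price_per_month = 10.00          # headline U.S. price
months_per_quarter = 3

quarterly_revenue = subscribers * price_per_month * months_per_quarter
payout_share = 0.70              # assumed share paid to labels and publishers

left_for_apple = quarterly_revenue * (1 - payout_share)
print(f"Quarterly revenue: ${quarterly_revenue / 1e9:.2f}B")  # ~$1.08B
print(f"After payouts:     ${left_for_apple / 1e9:.2f}B")     # ~$0.32B, before Apple's own costs
```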

What HomePod shows, though, is that Apple Music is part of the strategy story. Remember, strategically speaking, the point of services is to differentiate hardware. To that end, HomePod is not exclusive to Apple devices to prop up Apple Music; rather, Apple Music is exclusive to HomePod to sell speakers.3 Most commentary has assumed that:

  1. Customer wants HomePod
  2. Therefore, customer subscribes to Apple Music
  3. Apple profits

Again, this doesn’t make sense because Apple Music isn’t profitable!

Instead, I think the order goes like this:

  1. Customer owns an iPhone
  2. Customer subscribes to Apple Music because it is installed by default on their iPhone
  3. As an Apple Music subscriber, customer only has one choice in smart speakers: HomePod (and to make the decision to spend more money palatable, Apple pushes sound quality),4 from which Apple makes a profit

In this view, Apple Music serves as a “bridge” to translate iPhone market share into smart speaker share; services is a means, not an end, which is exactly what we should expect from a company with Apple’s vertical business model.

The Apple Squeeze

This fact — that Apple is a vertical company that makes money by selling hardware at a profit — explains two comments by Cook that stood out on last week’s earnings call.

First was the insistence that analysts evaluate Apple according to iPhones sold per week, not per quarter, the reason being that 1Q 2018 had 13 weeks while 1Q 2017 had 14. That’s fine as far as it goes: Apple sold fewer iPhones last quarter than it did a year ago, but more per week. A year ago, though, Apple was bragging about “all-time unit and revenue records for iPhone”, when in fact the per-week number was lower than 1Q 2016.

Apple’s sudden insistence on per-week numbers is like a company complaining about currency: sure, it matters, but executives only make a big deal out of it when they are trying to divert attention from something else — in this case, stagnant iPhone unit growth.

Why, then, was Apple’s iPhone revenue up? Well, when you raise prices and a segment of your customer base will only buy the best, you can achieve higher average selling prices — over $100 higher year-over-year ($796 versus $694) — which means higher revenue.
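
The arithmetic, sketched below with the article’s ASPs and approximations of Apple’s publicly reported unit volumes, shows how fewer total units can still mean more revenue:

```python
# Fewer units over fewer weeks, but a much higher average selling price.
# Unit counts are approximations of Apple's reported figures.

q1_2017 = {"units": 78_300_000, "weeks": 14, "asp": 694}
q1_2018 = {"units": 77_300_000, "weeks": 13, "asp": 796}

for label, q in (("1Q 2017", q1_2017), ("1Q 2018", q1_2018)):
    revenue = q["units"] * q["asp"]
    per_week = q["units"] / q["weeks"]
    print(f"{label}: {per_week / 1e6:.2f}M units/week, ${revenue / 1e9:.1f}B revenue")

# 1Q 2017: 5.59M units/week, $54.3B revenue
# 1Q 2018: 5.95M units/week, $61.5B revenue  <- total units down, revenue up
```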

Charging its best customers more for iPhones wasn’t the only reason Apple’s revenue was higher, though: remember that Apple is making more off of every customer over time via Services. And there is one more piece: Apple is selling its best customers more and more devices.

Devices > Users

This was the second Cook comment that stood out, in response to a question about how many users Apple had (emphasis mine):

We’re not releasing a user number, because we think that the proper way to look at it is to look at active devices. It’s also the one that is the most accurate for us to measure. And so that’s our thinking behind there.

Cook — who repeated the sentiment later in the call — couldn’t have given a more strident example of how every company is best viewed according to the dictates of their business model. If companies are what they measure, then what matters to Apple is the number of devices sold, not the number of users. Indeed, the user is a means to the end of selling a device — and ideally more than one at a time!

Consider nearly every major Apple product announcement of the last decade:

  • iPad: Standalone, but thanks to Apple’s push for unified apps, is immediately rendered more valuable if a customer already owns an iPhone and has bought multiple apps
  • Apple TV: Standalone, but thanks to Apple’s push for unified apps and AirPlay protocol, is immediately rendered more valuable if a customer already owns an iPhone
  • Apple Watch: Only works with an iPhone, which means by definition it can only be sold to an existing Apple customer
  • AirPods: Work with all phones, but better on iPhone, which, conveniently enough, dropped the headphone jack at the same time AirPods were announced
  • HomePod: Only works with an iOS device, which means by definition it can only be sold to an existing Apple customer (with Apple Music as a push)

Apple’s growth is almost entirely inwardly-focused when it comes to its user-base: higher prices, more services, and more devices.

Apple’s Middle Age

This is by no means a condemnation of Apple. Every single move I’ve described above is justified by two circumstances in particular.

First, as a general rule, challengers pursue interoperability while incumbents strive for incompatibility. This is Strategy 101: seek to fight battles where you have the greatest advantage. When Apple was making the iPod, its advantage was a superior device; making that device interoperable with Windows let Apple fight the portable music player battle on its terms. Today, though, Apple already has dominant market share: better to make its devices exclusive to its ecosystem, preventing rivals from bringing their own advantage (superior voice assistants, in the case of Alexa and Google Assistant) to bear.

Second, the high-end smartphone market — that is, the iPhone market — is saturated. Apple still has the advantage in loyalty, which means switchers will on balance move from Android to iPhone, but that advantage is counter-weighted by clearly elongating upgrade cycles. To that end, if Apple wants growth, its existing customer base is by far the most obvious place to turn.

In short, it just doesn’t make much sense to act like a young person with nothing to lose: one gets older, one’s circumstances and priorities change, and one settles down. It’s all rather inevitable.

Keep in mind, the swashbuckling Apple — the one led by Steve Jobs, not Tim Cook — that looms so large in everyone’s imaginations, couldn’t have had more different circumstances. Jobs was a product and execution genius, but in truth we have no idea how he would deal with the strategy questions facing Cook. Making iTunes for Windows was as correct strategically as is making HomePod exclusive to iOS devices; that the former fits one’s mental model of how a company “should” operate is a matter of circumstance, not principle.

So it was for every Jobs decision: expanding the iPod market with the Mini wasn’t disrupting itself; it was a means of making more money. An even starker example is the iPhone: cannibalizing oneself is a whole lot less impressive when the cannibalizing product has a higher ASP and higher margins. This is to take nothing away from either decision, simply to note that it’s a lot easier to make decisions everyone loves when the overall market is growing.

The fact of the matter is that Apple under Cook is as strategically sound a company as there is; it makes sense to settle down. What that means for the long run — stability-driven growth, or stagnation — remains to be seen.

  1. My reasoning: Apple Music for the car (Siri integration), and Spotify for everywhere else; Spotify Connect is excellent
  2. Except for Apple Music on Android
  3. Sonos does not count! You can’t use voice. In usage it is no different than using another smart speaker via Bluetooth
  4. You can play other streaming services on HomePod, but only via the increasingly archaic AirPlay protocol; similarly, you can play Apple Music on an Echo or Google Home, but only via the similarly limited Bluetooth protocol

Amazon Health

It’s pretty rare for the same company to feature in two consecutive Weekly Articles; yesterday’s announcement of a health care initiative involving Amazon, though, is not only incredibly intriguing, it also fits directly into some of the most important themes on Stratechery. I couldn’t resist.

The Announcement

From a joint press release:

Amazon, Berkshire Hathaway and JPMorgan Chase & Co. announced today that they are partnering on ways to address healthcare for their U.S. employees, with the aim of improving employee satisfaction and reducing costs. The three companies, which bring their scale and complementary expertise to this long-term effort, will pursue this objective through an independent company that is free from profit-making incentives and constraints. The initial focus of the new company will be on technology solutions that will provide U.S. employees and their families with simplified, high-quality and transparent healthcare at a reasonable cost.

Tackling the enormous challenges of healthcare and harnessing its full benefits are among the greatest issues facing society today. By bringing together three of the world’s leading organizations into this new and innovative construct, the group hopes to draw on its combined capabilities and resources to take a fresh approach to these critical matters…

The effort announced today is in its early planning stages, with the initial formation of the company jointly spearheaded by Todd Combs, an investment officer of Berkshire Hathaway; Marvelle Sullivan Berchtold, a Managing Director of JPMorgan Chase; and Beth Galetti, a Senior Vice President at Amazon. The longer-term management team, headquarters location and key operational details will be communicated in due course.

I’ve gotten more and more questions from readers about the possibilities of Amazon and health care, even before this announcement. I’ve been surprised, to be honest, but perhaps I shouldn’t be: I was the one who declared on The Bill Simmons Podcast that “Amazon’s goal is to basically take a skim off of all economic activity”, and given that health care was 17.9% of GDP in 2016, well, I guess that means I predicted this!

Amazon Health Marketplace

What is “this”, though? It certainly is tempting to jump immediately to a possible end game predicated on the ideas I have laid out in The Amazon Tax, Amazon’s New Customer, and Amazon Go and the Future:

  • Amazon builds out “interfaces” for its employees (as well as those of Berkshire Hathaway and J.P. Morgan Chase — I’ll just refer to Amazon from here on out), both digital and physical, to access basic healthcare needs; these sit in front of pharmacy benefit managers (PBMs), insurance administrators, wholesale distributors and pharmacies.
  • Amazon starts building out infrastructure for those healthcare suppliers, requiring them to serve Amazon’s employees using a standard interface.

Amazon could then go in one of two directions. First, Amazon could start to backwards integrate into its suppliers’ business; there are hints the company is already exploring pharmaceutical sales, and the Wall Street Journal says the idea was broached. That said, I actually think this is less likely; insurance operates best at more scale, not less: first and foremost, the larger the pool, the more risk can be spread, and the greater the efficiency gains in administration. More scale also gives more bargaining power over other parts of the healthcare chain. Three companies, large though they may be, aren’t going to be as effective as large insurers, no matter how well-managed.

What would make more sense to me is that, having first built an interface for its employees and then a standardized infrastructure for its health care suppliers, Amazon converts the latter into a marketplace where PBMs, insurance administrators, distributors, and pharmacies have to compete to serve employees. And then, once that marketplace is functioning, Amazon will open the floodgates on the demand side, offering that standard interface to every large employer in America.

Aggregation and Suppliers

This is certainly ambitious enough — basically intermediating U.S. employers and the U.S. healthcare industry — but in fact this only sets the stage for the wholesale disruption of American healthcare. First, Amazon could open up its standard interface not only to other large employers, but also to small- and medium-sized businesses, and even individuals; in this way the Amazon Health Marketplace could aggregate by far the most demand for healthcare.

Consolidating demand by offering a superior user experience is how aggregators gain power; given the scenario I just sketched out, Aggregation Theory has a prediction about what might happen next:

Once an aggregator has gained some number of end users, suppliers will come onto the aggregator’s platform on the aggregator’s terms, effectively commoditizing and modularizing themselves. Those additional suppliers then make the aggregator more attractive to more users, which in turn draws more suppliers, in a virtuous cycle.

This means that for aggregators, customer acquisition costs decrease over time; marginal customers are attracted to the platform by virtue of the increasing number of suppliers. This further means that aggregators enjoy winner-take-all effects: since the value of an aggregator to end users is continually increasing it is exceedingly difficult for competitors to take away users or win new ones.

The key words there are “commoditize and modularize”, and this is where the option I dismissed above comes into play, but not in the way most think: Amazon doesn’t create an insurance company to compete with other insurance companies (or the other pieces of healthcare infrastructure); rather, Amazon makes it possible — and desirable — for individual health care providers to come onto their platform directly, be that doctors, hospitals, pharmacies, etc.

After all, if Amazon is facilitating the connection to patients, what is the point of having another intermediary? Moreover, by virtue of being the new middleman, Amazon has the unique ability to consolidate patient data in a way that is not only of massive benefit to patients and doctors but also to the application of machine learning.

Of course that leaves the insurance piece, which makes Berkshire Hathaway a useful partner; conveniently, Berkshire Hathaway is not in the health insurance business, but rather the health reinsurance business — that is, they insure the insurers. Or, to put it another way, they don’t provide any of the services that Amazon Health Marketplace might make obsolete, and they specialize in the one thing Amazon Health Marketplace would need.

Oh, and this will be really expensive, and take years to get off the ground. It certainly would be helpful to have access to financing and capital markets, which is exactly what a partner like JPMorgan Chase & Company provides. The skills these three companies bring to bear seem far more relevant than the number of employees (and besides, the company alliance approach to traditional health care has been done).

Is This Happening?

Needless to say, what I just sketched out is extremely ambitious; it is easy to let one’s mind run wild when it comes to a company without a name, a management team, or a location. Moreover, the press release was quite modest in its ambitions; I quoted it above, but here is the relevant piece again:

The three companies, which bring their scale and complementary expertise to this long-term effort, will pursue this objective through an independent company that is free from profit-making incentives and constraints. The initial focus of the new company will be on technology solutions that will provide U.S. employees and their families with simplified, high-quality and transparent healthcare at a reasonable cost.

Ah yes, “technology solutions”. We’ve certainly seen that before, and it hasn’t worked.

That, though, is where the previous line comes in: the scenario that I sketched out above is wildly profitable, to be sure, but only years down the road when demand is fully aggregated and Amazon Health Marketplace is taking a skim off of every transaction; if short-term profit isn’t the goal, long-term goals become much more realistic.

And there it is, in the first sentence: “this long-term effort.” These three companies are clear up-front that this isn’t a one-off effort; there is the commitment to the long-term, and while “technology solutions” seems like a short-term play, I just explained why that is the place to start. Aggregators win with products that are simple, high-quality, and easy to understand — exactly what this press release promised.

Is This Possible?

I’m not a healthcare expert by any means; I know enough to know that the U.S. system is incredibly complex, bedeviled by incentive problems, and tied up in all kinds of messy ways with regulations (mostly justified!).

At the same time, the U.S. healthcare system is inextricably tied up with the post-World War 2 order; indeed, the entire reason employers are so important to the system is because of World War 2 regulations that instituted price controls on wages, incentivizing employers to use benefits as a means of attracting workers (this was further enshrined by making healthcare benefits tax-exempt).

That system, though, is under more duress than ever. I wrote in TV Advertising’s Surprising Strength — and Inevitable Fall:

What should be terrifying to television executives is that all of those pieces that make television advertising the gold mine that it has been are under the exact same threat that TV watching itself is: the threat of the Internet. Start with the top 25 advertisers in the U.S.…

Notice that the vast majority of the industries on this list are dominated by massive companies that compete on scale and distribution. CPG is the perfect example: building a “house of brands” allows a company like Procter & Gamble to target demographic groups even as they leverage scale to invest in R&D, bring down the cost of products, and most importantly, dominate the distribution channel (i.e. retail shelf space). Said retailers, meanwhile, are huge in their own right, not only so they can match their massive suppliers at the bargaining table but also so they can scale logistics, inventory management, store development, etc. Automobile companies, meanwhile, are not unlike CPG companies: they operate a “house of brands” to serve different demographics while benefitting from scale in production and distribution; the primary difference is that they make money through one large purchase instead of over many smaller purchases over time.

Note [that nearly all] of the companies on this list are threatened by the Internet.

My thesis in that article — repeated in Dollar Shave Club and the Disruption of Everything and The Sports Linchpin — is that the post-World War 2 economic system was deeply intertwined and interdependent, and that the root of everything was control of distribution. The Internet, though, made the distribution of information free, upsetting not just information providers like publishers, but all industries; it follows, then, that to the extent that the current health care system is built on that post-World War 2 order, such is the extent to which it is vulnerable.

That is not to say its collapse is imminent — quite the opposite, in fact. Each seemingly distinct industry, by virtue of being interdependent with the others, supports the rest. My expectation, then, is not that the Internet methodically disrupts industry after industry in some sort of chronological order, but rather that the entire edifice lasts far longer than technologists think, only to one day collapse far quicker than anyone expected.

The ultimate winners of this shakeout, then, are not only companies that are building businesses predicated on the Internet, but just as importantly, are willing and able to build those businesses with the patience that will be necessary to wait for the old order to collapse, particularly if that collapse happens years or decades after the underlying business models are rotten.

There is no more patient company than Amazon.

Amazon Go and the Future

Amazon Go is the story of technology.

Yesterday the Amazon Go concept store in Seattle opened to the public, filled with sandwiches, salads, snacks, various groceries, and even beer and wine (Recode has a great set of pictures here). The trick is that you don’t pay, at least in person: a collection of cameras and sensors pair your selection to your Amazon account — registered at the door via smartphone app — which rather redefines the concept of “grab-and-go.”

The economics of Amazon Go define the tech industry; the strategy, though, is uniquely Amazon’s. Most of all, the implications of Amazon Go explain both the challenges and opportunities that the rise of tech presents to society broadly.

The Economics of Tech

This point is foundational to nearly all of the analysis of Stratechery, which is why it’s worth repeating. To understand the economics of tech companies one must understand the difference between fixed and marginal costs, and for this Amazon Go provides a perfect example.

A cashier — and forgive the bloodless language for what is flesh and blood — is a marginal cost. That is, for a convenience store to sell one more item requires some amount of time on the part of a cashier, and that time costs the convenience store operator money. To sell 100 more items requires even more time — costs increase in line with revenue.

Fixed costs, on the other hand, have no relation to revenue. In the case of convenience stores, rent is a fixed cost; 7-11 has to pay its lease whether it serves 100 customers or 1,000 in any given month. Certainly the more it serves the better: that means the store is achieving more “leverage” on its fixed costs.

In the case of Amazon Go specifically, all of those cameras and sensors and smartphone-reading gates are fixed costs as well — two types, in fact. The first is the actual cost of buying and installing the equipment; those costs, like rent, are incurred regardless of how much revenue the store ultimately produces.

Far more extensive, though, are the costs of developing the underlying systems that make Amazon Go even possible. These are R&D costs, and they are different enough from fixed costs like rent and equipment that they typically live in an entirely different place in the financial statements.

These different types of costs affect management decision-making at different levels (that is, there is a spectrum from purely marginal costs to purely fixed costs; it all depends on your time frame), as the sketch after this list makes concrete:

  • If the marginal cost of selling an individual item is more than the marginal revenue gained from selling the item (i.e. it costs more to pay a cashier to sell an item than the gross profit earned from an item) then the item won’t be sold.
  • If the monthly rent for a convenience store exceeds the monthly gross profit from the store, then the store will be closed.
  • If the cost of renovations and equipment (in the case of small businesses, this cost is usually the monthly repayments on a loan) exceeds the net profit ex-financing, then the owner will go bankrupt.
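
Here is that sketch: the three rules applied in order, with entirely hypothetical monthly numbers for a small convenience store:

```python
# Three layers of cost-driven decisions, from most to least marginal.
# All figures are hypothetical and monthly.

gross_profit_per_item = 1.50   # revenue minus cost of goods, per item
cashier_cost_per_item = 0.40   # marginal labor cost to sell one item
items_per_month = 20_000
rent = 12_000
loan_repayment = 8_000         # financing for renovations and equipment

# Rule 1: sell an item only if marginal revenue exceeds marginal cost.
worth_selling = gross_profit_per_item > cashier_cost_per_item

# Rule 2: keep the store open only if gross profit covers the rent.
monthly_gross = (gross_profit_per_item - cashier_cost_per_item) * items_per_month
worth_staying_open = monthly_gross > rent       # 22,000 > 12,000

# Rule 3: avoid bankruptcy only if profit ex-financing covers the loan.
profit_ex_financing = monthly_gross - rent      # 10,000
solvent = profit_ex_financing > loan_repayment  # 10,000 > 8,000

print(worth_selling, worth_staying_open, solvent)  # True True True
```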

Keep in mind, most businesses start out in the red: it usually takes financing, often in the form of a loan, to buy everything necessary to even open the business in the first place; a company is not truly profitable until that financing is retired. Of course once everything is paid off a business is not entirely in the clear: physical objects like shelves or refrigeration units or lights break and wear out, and need to be replaced; until that happens, though, money can be made by utilizing what has already been paid for.

This, though, is why the activity that is accounted for in R&D is so important to tech company profitability: while digital infrastructure obviously needs to be maintained, by-and-large the investment reaps dividends far longer than the purchase of any physical good. Amazon Go is a perfect example: the massive expense that went into developing the underlying system powering cashier-less purchasing does not need to be spent again; moreover, unlike shelving or refrigerators, the output of that expense can be duplicated infinitely without incurring any additional cost.

This principle undergirds the fantastic profitability of successful tech companies:

  • It was expensive to develop mainframes, but IBM could reuse the expertise to build them and most importantly the software needed to run them; every new mainframe was more profitable than the last.
  • It was expensive to develop Windows, but Microsoft could reuse the software on all computers; every new computer sold was pure profit.
  • It was expensive to build Google, but search can be extended to anyone with an Internet connection; every new user was an opportunity to show more ads.
  • It was expensive to develop iOS, but the software can be used on billions of iPhones, every one of which generates tremendous profit.
  • It was expensive to build Facebook, but the network can scale to two billion people and counting, all of whom can be shown ads.

In every case a huge amount of fixed costs up front is overwhelmed by the ongoing ability to make money at scale; to put it another way, tech companies combine fixed costs with marginal revenue opportunities, such that they make more money on additional customers without any corresponding rise in costs.
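
A toy model makes the point; the numbers are hypothetical, and what matters is the shape of the curve, not the values:

```python
# Operating leverage: large fixed costs, near-zero marginal costs, so each
# additional customer is almost pure profit once development is paid for.

fixed_rd_cost = 100_000_000   # one-time software development
revenue_per_customer = 10.0   # annual; marginal cost to serve assumed ~zero

for customers in (1_000_000, 10_000_000, 100_000_000):
    revenue = customers * revenue_per_customer
    profit = revenue - fixed_rd_cost
    print(f"{customers / 1e6:>5.0f}M customers: profit ${profit / 1e6:>5,.0f}M "
          f"({profit / revenue:.0%} margin)")

# 1M customers:   -$90M (deep in the red)
# 10M customers:    $0M (break-even)
# 100M customers: $900M (90% margin)
```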

This is clearly the goal with Amazon Go: to build out such a complex system for a single store would be foolhardy; Amazon expects the technology to be used broadly, unlocking additional revenue opportunities without any corresponding rise in fixed costs — of developing the software, that is; each new store will still require traditional fixed costs like shelving and refrigeration. That, though, is why this idea is so uniquely Amazonian.

The Strategy of Technology

The most important difference between Amazon and most other tech companies is that the latter generally invest exclusively in research and development — that is to say, in software. And why not? As I just explained, software development has the magical properties of value retention and infinite reproduction. Better to let others handle the less profitable and more risky (at least in the short term) marginal complements. To take the three most prominent examples:

  • Microsoft builds the operating system (and eventually, application software) and leaves the building of computers to OEMs
  • Google builds the search engine and leaves the creation of web pages to be searched to the rest of the world
  • Facebook builds the infrastructure of the network, and leaves the creation of content to be shared to its users

All three companies are, at least in terms of their core businesses, pure software companies, which means the economics of their businesses align with the economics of software: massive fixed costs, and effectively zero marginal costs. And while Microsoft’s market, large though it may have been, was limited by the price of a computer, Google and Facebook, by virtue of their advertising model, are super-aggregators capable of scaling to anyone with an Internet connection. All three also benefit (or benefited) from strong network effects, both on the supply and demand side; these network effects, supercharged by the ability to scale for free, are these companies’ moats.

Apple and IBM, on the other hand, are/were vertical integrators, particularly IBM. In the mainframe era the company built everything from components to operating systems to application software and sold it as a package with a long-term service agreement. By doing so all would-be competitors were foreclosed from IBM’s market; eventually, in a(n unsuccessful) bid to escape antitrust pressure, application software was opened up, but that ended up entrenching IBM further by adding on a network effect. Apple isn’t nearly as integrated as IBM was back in the 60s, but it builds both the software and the finished products on which it runs, foreclosing competitors (while gaining economies of scale from sourcing components and two-sided network effects through the App Store); Apple is also happy to partner with telecoms, which have their own network effects.

Amazon is doing both.

In market after market the company is leveraging software to build horizontal businesses that benefit from network effects: in e-commerce, more buyers lead to more suppliers lead to more buyers. In cloud services, more tenants lead to greater economies of scale, not just in terms of servers and data centers but in the leverage gained by adding ever more esoteric features that both meet market needs and create lock-in. As I wrote last year, the point of buying Whole Foods was to jump start a similar dynamic in groceries.

At the same time Amazon continues to vertically integrate. The company is making more and more products under its own private labels on one hand, and building out its fulfillment network on the other. The company is rapidly moving up the stack in cloud services, offering not just virtual servers but microservices that obviate the need for server management entirely. And in logistics the company has its own airplanes, trucks, and courier services, and has promised drones, with the clear goal of allowing the company to deliver products entirely on its own.

To be both horizontal and vertical is incredibly difficult: horizontal companies often betray their economic model by trying to differentiate their vertical offerings; vertical companies lose their differentiation by trying to reach everyone. That, though, gives a hint as to how Amazon is building out its juggernaut: economic models — that is, the constraint on horizontal companies going vertical — can be overcome if the priority is not short-term profit maximization.

Amazon’s Triple Play

In 2012 Amazon acquired Kiva Systems for $775 million, the then-second largest acquisition in company history.1 Kiva Systems built robots for fulfillment centers, and many analysts were puzzled by the purchase: Kiva Systems already had a plethora of customers, and Amazon was free to buy their robots for a whole lot less than $775 million. Both points argued against a purchase: continuing to sell to other companies removed the only plausible strategic rationale for buying the company instead of simply buying robots, but to stop selling to Kiva Systems’ existing customers would be value-destructive. It’s one thing to pay 8x revenue, as Amazon did; it’s another to cut off that revenue in the process.

In fact, though, that is exactly what Amazon did. The company had no interest in sharing Kiva Systems’ robots with its competitors, leaving a gap in the market. At the same time the company ramped up its fulfillment center build-out, gobbling up all of Kiva Systems’ capacity. In other words, Amazon made the “wrong” move in the short-term for a long-term benefit: more and better fulfillment centers than any of its competitors — and spent billions of dollars doing so.

This willingness to spend is what truly differentiates Amazon, and the payoffs are tremendous. I mentioned telecom companies in passing above: their economic power flows directly from massive amounts of capital spending; said power is limited by a lack of differentiation. Amazon, though, having started with a software-based horizontal model and network-based differentiation, has not only started to build out its vertical stack but has spent massive amounts of money to do so. That spending is painful in the short-term — which is why most software companies avoid it — but it provides a massive moat.

That is why, contra most of the analysis I have seen, I don’t think Amazon will license out the Amazon Go technology. Make no mistake, that is exactly what a company like Google would do (and as I expect them to do with Waymo), and for good reason: the best way to get the greatest possible return on software R&D is to spread it as far and wide as possible, which means licensing. The best way to build a moat, though, is to actually put in the effort to dig it, i.e. spend the money.

To that end, I suspect that in five to ten years the countries Amazon serves will be blanketed with Amazon Go stores, selling mostly Amazon products, augmented by Amazon fulfillment centers. That is the other point many are missing; yes, the Amazon Go store took pains to show that it still had plenty of workers: shelf stockers, ID checkers, food preparers, etc.

Workers assemble sandwiches behind a sidewalk window at the Amazon Go store in downtown Seattle (AP Photo/Elaine Thompson)

Unlike cashiers, though, none of these jobs have to actually be present in the store most of the time. It seems obvious that Amazon Go stores of the future will rarely have employees in store at all: there will be a centralized location for food preparation and a dedicated fleet of shelf stockers. That’s the thing about Amazon: the company isn’t afraid of old-world scale. No, sandwich preparation doesn’t scale infinitely, but it does scale, particularly if you are willing to spend.

Marx Versus Bezos

The political dilemma embedded in this analysis is hardly new: Karl Marx was born 200 years ago. Technology like Amazon Go is the ultimate expression of capital: invest massive amounts of money up front in order to reap effectively free returns at scale. What has fundamentally changed, though, is the role of labor: Marx saw a world where capital subjugated labor for its own return; technologies like Amazon Go have increasingly no need for labor at all.

Some, certainly, see this as a problem: what about all the cashiers? What about all the truck drivers? What about all of the other jobs that will be displaced by automation? Well, I would ask, what about the labor of Marx’s day, the factory workers born of the industrial revolution that he thought should overthrow the bourgeoisie?

In fact, they are all gone, replaced by automation. And, in the meantime, nearly all of humanity has been lifted out of abject poverty. As Nicholas Kristof wrote in the New York Times:

2017 was probably the very best year in the long history of humanity. A smaller share of the world’s people were hungry, impoverished or illiterate than at any time before. A smaller proportion of children died than ever before. The proportion disfigured by leprosy, blinded by diseases like trachoma or suffering from other ailments also fell…

Every day, the number of people around the world living in extreme poverty (less than about $2 a day) goes down by 217,000, according to calculations by Max Roser, an Oxford University economist who runs a website called Our World in Data. Every day, 325,000 more people gain access to electricity. And 300,000 more gain access to clean drinking water.

I don’t seek to minimize real struggles, much less the real displacement that will come from technologies like Amazon Go writ large. For decades technology helped the industrial world work better; more and more, technology is replacing that world completely, and there will be pain. That, though, is precisely why it is worth remembering that the world is not static: to replace humans is, in the long run, to free humans to create entirely new needs and means to satisfy those needs. It’s what we do, and the faith to believe it will happen again will be the best guide in figuring out how.

As for Amazon, the company’s goal to effectively tax all economic activity continues apace. Surely the company is grateful for the attention Facebook is receiving from the public, even as it builds a monopoly with a triple moat. The lines outside Amazon Go, though, are a reminder of exactly why aggregator monopolies are something entirely new: these companies are dominant because people love them. Regulation may be as elusive as Marx’s revolution.

  1. The original version of this article mistakenly said then-largest; Zappos was acquired for $900 million in 2009 []

Facebook’s Motivations

The trepidation — and inevitable outrage — with which much of the media has greeted Facebook’s latest change to the News Feed algorithm seems rather anticlimactic. Nearly three years ago I wrote in The Facebook Reckoning that any publisher that was not a “destination site” — that is, a site that had a direct connection with readers — had no choice but to go along with Facebook’s Instant Articles initiative, even though Facebook could change its mind at any time. A few months later, in Popping the Publishing Bubble, I explained why advertising would coalesce around Google and Facebook; that is indeed what has happened, which is the real problem for publishers. Facebook’s algorithm change simply hastens the inevitable.

The story for media is for all intents and purposes unchanged: success depends on building a direct relationship with readers; monetizing that relationship (likely through subscriptions, but not necessarily); and leveraging Facebook as an acquisition channel for those long-term relationships, not short-term page views. If anything this change will help reader-focused publications: users will be more likely to see links shared by their friends, enhancing the word-of-mouth marketing that is the foundation of reader-centric publications.

What I find far more compelling is the question of Facebook’s motivation. Facebook CEO Mark Zuckerberg wrote on Facebook:

One of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent. We built Facebook to help people stay connected and bring us closer together with the people that matter to us. That’s why we’ve always put friends and family at the core of the experience. Research shows that strengthening our relationships improves our well-being and happiness.

We feel a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being. So we’ve studied this trend carefully by looking at the academic research and doing our own research with leading experts at universities. The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.

Based on this, we’re making a major change to how we build Facebook. I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions. We started making changes in this direction last year, but it will take months for this new focus to make its way through all our products. The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups. As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people…

Now, I want to be clear: by making these changes, I expect the time people spend on Facebook and some measures of engagement will go down. But I also expect the time you do spend on Facebook will be more valuable. And if we do the right thing, I believe that will be good for our community and our business over the long term too.

Forgive the longer-than-usual excerpt, but there is a lot here. Zuckerberg:

  • Implicitly admits that time spent on Facebook may not be “well-spent”, and cites research suggesting that many common activities on Facebook may not be good for you
  • Introduces the change as a shift in goals from helping people find relevant content (a “perfect personalized newspaper”, as Zuckerberg called it in 2014) to helping people have more meaningful social interactions
  • Suggests that the time spent on Facebook may decrease due to these changes (sending Facebook’s stock down)

In an interview for the Daily Update, Vice-President of News Feed Adam Mosseri argued that this would benefit Facebook in the long run:

This change is primarily focused on doing right by our community, because we actually believe that by doing right by the community in the long run will be good for the business and so we just try to take a long term approach to any question like this.

I absolutely believe the last part of that quote: Facebook is taking a long-term view, and it would only make this change were it right for the business. I’m just not entirely convinced that Zuckerberg and Mosseri are telling us the entire story.

Facebook’s Believability

Start with Zuckerberg’s claim that this change will reduce “the time people spend on Facebook and some measures of engagement.” Mosseri said that would be mostly due to less time spent watching video, given that video content would likely be hurt by this algorithmic change.

That in and of itself is certainly interesting; Zuckerberg has been pushing the importance of video on earnings calls for some time now, and no wonder: TV advertising money remains the proverbial gold-at-the-end-of-the-rainbow for all advertising-based tech companies. Is Facebook giving up on its leprechaun dreams?

I don’t think so, and not just because forgoing all of that potential revenue would be quite unbelievable. Instead, I think the answer was laid out by Zuckerberg during Facebook’s Q1 2017 earnings call while answering a question about Facebook’s new video tab:

For the video tab, the goal that we have for the product experience is to make it so that when people want to watch videos or they want to keep up to date on what’s going on with their favorite show or what’s going on with the public figure that they want to follow, that they can come to Facebook and go to a place knowing that that’s going to show them all the content that they’re interested in.

So that’s a pretty different intent than how people come to Facebook today. Today, for the most part, people pull Facebook out when they have a few minutes, when they want to catch up and see what’s going on in the world with their friends and in the news and everything that’s going on. That’s very different from saying, hey, I want to watch video content now. And that’s what I think we’re going to unlock with this tab.

My takeaway at the time was that Facebook was effectively building two video products: one for content people wanted to watch (the video tab), and the other for content people watched because it was stuck in front of them (News Feed video).

I think that was right, but it also follows that the former would be easier to monetize: after all, people are more likely to put up with an advertisement for a video they want to watch, as opposed to one they are watching because it happened to be presented to them. Indeed, the latter could be actively harmful, reminding people to simply close the app. To that end, reducing the time users spend watching videos that Facebook would never effectively monetize doesn’t seem like a particularly large loss.

That’s not the only reason why it is hard to take Facebook seriously when it comes to proclamations of doom-and-gloom. On the 3Q 2016 earnings call, Facebook CFO Dave Wehner said that Facebook would soon stop growing the ad load on News Feed and that advertising growth would “come down meaningfully.”

I wondered at the time if this meant Facebook’s ads were less differentiated — and thus had less pricing power in the face of increasing scarcity — than I expected. In fact, I needn’t have wondered; my initial analysis was spot-on: as I have been documenting in the Daily Update, Facebook’s price-per-ad has been increasing even as ad impression growth has declined over the last year, strongly suggesting that Facebook has pricing power.

So excuse me if I take Facebook’s pronouncements about the harm that will soon befall its business with a rather large grain of salt. The company has already demonstrated it has pricing power such that its advertising revenue can continue to grow strongly even as the number of ads-per-user plateaus; moreover, that power further complicates any attempt to understand Facebook’s motivation.

Facebook’s Imperviousness

The key thing to remember about Facebook’s — and Google’s — dominance in digital ads is that their advantages are multi-faceted. First and foremost is the attractiveness of their products to users; that attractiveness is rooted not only in technology but also in both data and people-based network effects. Second is the depth of information both companies have on their users, allowing advertisers to spend more efficiently on their platforms — particularly on mobile — than elsewhere. The third advantage, though, is perhaps the least appreciated: buying ads on Google and Facebook is just so much easier. They are one-stop shops for reaching anyone, which means competitors don’t merely need similar targeting capabilities and user engagement; they need to be significantly better to justify the effort.

These structural advantages lend credibility to Facebook’s contention that it is making these changes with its users’ best interests in mind. After all, it ultimately won’t matter to the bottom line. Indeed, note that Zuckerberg made no mention of these changes impacting revenue, as he surely should have were this change to have a negative impact; in contrast, Zuckerberg warned that hiring new content moderators would impact profitability on the last earnings call.

Of course one hesitates to give Facebook too much credit if that is the case: it would be a clear example of a Strategy Credit, where doing the right thing is easy because it doesn’t actually hurt the underlying business. That may be reassuring in the short term, but it points to still more possible Facebook motivations.

Facebook’s Threats

For about as long as Facebook has been a going concern, the conventional wisdom about its downfall has remained largely the same: some other social network is going to come along, probably amongst young people, and take all of the attention away from Facebook. In fact, as I argued last year in Facebook, Phones, and Phonebooks, the social sphere has room for many players — including networks that garner huge amounts of attention — even as Facebook’s position remains secure:

It is increasingly clear that there are two types of social apps: one is the phone book, and one is the phone. The phone book is incredibly valuable: it connects you to anyone, whether they be a personal friend, an acquaintance, or a business. The social phone book, though, goes much further: it allows the creation of ad hoc groups for an event or network, it is continually updated with the status of anyone you may know or wish to know, and it even provides an unlimited supply of entertaining professionally produced content whenever you feel the slightest bit bored.

The phone, on the other hand, is personal: it is about communication between you and someone you purposely reach out to. True, telemarketing calls can happen, but they are annoying and often dismissed. The phone is simply about the conversation that is happening right now, one that will be gone the moment you hang up.

In the U.S. the phone book is Facebook and the phone is Snapchat; in Taiwan, where I live, the phone book is Facebook and the phone is LINE. Japan and Thailand are the same, with a dash of Twitter in the former. In China WeChat handles it all, while Kakao is the phone in South Korea. For much of the rest of the world the phone is WhatsApp, but for everywhere but China the phone book is Facebook.

This isn’t a bad thing; indeed, it is an incredibly valuable thing: Facebook’s status as a utility is exactly what makes the company so valuable. It has the data to target advertising and the feed in which to place it, and it is difficult to imagine any of the phone companies overtaking it in value.

Make no mistake, in this analogy the phone book is where the money is: Snapchat and Twitter are both struggling to monetize in large part because phones simply aren’t conducive to advertising.1 That, though, makes Facebook’s new focus even more interesting: if advertising struggles to find a place when users are more actively engaged (versus passively consuming content), why is Facebook seemingly going in the opposite direction?

One possible answer is that conventional wisdom is right: Facebook may still have a hold on identity, but the amount of time users — particularly the most valuable users — are spending on the network is steadily decreasing.2 That may not be a problem for the business today, but it certainly could be in the long run.

Another possible answer is that Facebook fears regulation, and by demonstrating the ability to self-correct and focus on what makes Facebook unique the company can avoid regulatory issues completely. The question, though, is how exactly would Facebook be regulated? There certainly is no crime in providing a free service that lets people connect with those they know. I suggested last year that perhaps Facebook’s monopoly power could be seen in its seeming inability to help publishers monetize or especially in digital ads, but those cases are far more theoretical (or in the case of publishers, fantastical) for now.

Perhaps there is a third motivation though: call it “enlightened self-interest.” Keep in mind from whence Facebook’s power flows: controlling demand. Facebook is a super-aggregator, which means it leverages its direct relationship with users, zero marginal costs to serve those users, and network effects, to steadily decrease acquisition costs and scale infinitely in a virtuous cycle that gives the company power over both supply (publishers) and advertisers.

It follows that Facebook’s ultimate threat can never come from publishers or advertisers, but rather demand — that is, users. The real danger, though, is not from users also using competing social networks (although Facebook has always been paranoid about exactly that); that is not enough to break the virtuous cycle. Rather, the only thing that could undo Facebook’s power is users actively rejecting the app. And, I suspect, the only way users would do that en masse would be if it became accepted fact that Facebook is actively bad for you — the online equivalent of smoking.

This is why I find Facebook’s focus on what is good for users to be so fascinating. On one level, maybe the company is, as it can afford to be, simply altruistic. On another, perhaps it is diverting attention from problematic trends in user engagement. Or perhaps it is seeking to neutralize its biggest threat by addressing it head-on.


I don’t know which of these motivations are correct — probably there is truth in all of them — which is precisely why I find this announcement so fascinating. This change could have been made and justified without even broaching the idea that Facebook might be bad for you; why did Facebook rest everything on that reasoning?

It certainly is hard to escape the election of President Trump. I have argued regularly that I don’t believe that fake news was a causal factor in Trump’s election, and I think that Facebook has been a convenient scapegoat for many.

On the other hand, I made the case back in the primaries that Facebook’s decimation of the media led to a correlated decimation of the parties’ ability to control the presidential candidate selection process, creating the conditions for a candidate like Trump to arise. In other words, I do blame Facebook for Trump, but for structural reasons, not causal ones. And even then, Facebook is a stand-in for the Internet’s effect broadly: were it not Facebook ruining media’s business model, it would have been some other company.

Zuckerberg, though, has always seemed to tilt towards the more utopian side of the spectrum when it comes to the Silicon Valley cliche of “changing the world.” The ardent belief that sharing and connecting will fix everything has been a fixture in Zuckerberg’s public comments ever since he emerged into the public sphere, and the CEO effectively declared at the 2016 F8 conference that Trump was in opposition to that.

In that light, dismissing Facebook’s change as a mere Strategy Credit is perhaps to give short shrift to Zuckerberg’s genuine desire to leverage Facebook’s power to make the world a better place. Zuckerberg argued in his 2017 manifesto Building Global Community:

Progress now requires humanity coming together not just as cities or nations, but also as a global community. This is especially important right now. Facebook stands for bringing us closer together and building a global community. When we began, this idea was not controversial. Every year, the world got more connected and this was seen as a positive trend. Yet now, across the world there are people left behind by globalization, and movements for withdrawing from global connection. There are questions about whether we can make a global community that works for everyone, and whether the path ahead is to connect more or reverse course.

Our job at Facebook is to help people make the greatest positive impact while mitigating areas where technology and social media can contribute to divisiveness and isolation. Facebook is a work in progress, and we are dedicated to learning and improving. We take our responsibility seriously.

That, though, leaves the question I raised in response to that manifesto:

Even if Zuckerberg is right, is there anyone who believes that a private company run by an unaccountable all-powerful person that tracks your every move for the purpose of selling advertising is the best possible form said global governance should take?

My deep-rooted suspicion of Zuckerberg’s manifesto has nothing to do with Facebook or Zuckerberg; I suspect that we agree on more political goals than not. Rather, my discomfort arises from my strong belief that centralized power is both inefficient and dangerous: no one person, or company, can figure out optimal solutions for everyone on their own, and history is riddled with examples of central planners ostensibly acting with the best of intentions — at least in their own minds — resulting in the most horrific of consequences; those consequences sometimes take the form of overt costs, both economic and humanitarian, and sometimes those costs are foregone opportunities and innovations. Usually it’s both.

Facebook’s stated reasoning for this change only heightens these contradictions: if indeed Facebook as-is harms some users, fixing that is a good thing. And yet the same criticism becomes even more urgent: should the personal welfare of 2 billion people be Mark Zuckerberg’s personal responsibility?

  1. LINE actually monetizes decently well on a per-user basis, but has struggled to grow outside of its core markets []
  2. This has been rumored for a long time, but it is very difficult to pick out facts from motivated guessing when it comes to Facebook scuttlebutt []

Meltdown, Spectre, and the State of Technology

You’ve heard the adage “It’s all 1s and 0s”, but that’s not a figure of speech: the transistor, the fundamental building block of computers, is simply a switch that is either on (“1”) or off (“0”). It turns out, though, as Chris Dixon chronicled in a wonderful essay entitled How Aristotle Created the Computer, that 1s and 0s, through the combination of mathematical logic and transistors, are all you need:

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century.

Dixon’s essay — which I’ve linked to previously — is well worth a read, but the relevant point for this article is perhaps a surprising one: computers are really stupid; what makes them useful is that they are stupid really quickly.

The Problem with Processor Vulnerabilities

Last week the technology world was shaken by the disclosure of two vulnerabilities in modern processors: Meltdown and Spectre. The announcement was a bit haphazard, thanks to the fact that the disclosure date was moved up by a week due to widespread speculation about the nature of the vulnerability (probably driven by updates to the Linux kernel), but also because Meltdown and Spectre are similar in some respects, but different in others.

Start with the similarities: the outcome for both vulnerabilities is the same — a non-privileged user can access information on the computer that they should not be able to see, like secret keys or passwords or any other type of data owned by other users. This is a particularly big problem for cloud services like AWS, where multiple “tenants” use the same physical hardware.

This multi-tenant architecture is achieved through the use of virtual machines: there is specialized software that runs on a single physical computer that allows each individual user to operate as if they have their own computer, when in fact they are sharing. This is a win-win: single-user computers sit idle the vast majority of the time (they are stupid really quickly), but if multiple users can use one computer then the hardware can be utilized far more efficiently. And, in the case of cloud services, that same concept can be scaled up to millions of physical computers sharing even more fundamental infrastructure like cooling, networking, administration, etc.

The entire edifice, though, is predicated on a fundamental assumption: that users in one virtual machine cannot access data from another. That assumption, by extension, relies on trust in the integrity of the virtual machine software, which relies on trust in the integrity of the underlying operating system, which ultimately relies on trust in the processor at the heart of a server. From the Meltdown white paper (emphasis mine):

To load data from the main memory into a register, the data in the main memory is referenced using a virtual address. In parallel to translating a virtual address into a physical address, the CPU also checks the permission bits of the virtual address, i.e., whether this virtual address is user accessible or only accessible by the kernel. As already discussed in Section 2.2, this hardware-based isolation through a permission bit is considered secure and recommended by the hardware vendors. Hence, modern operating systems always map the entire kernel into the virtual address space of every user process. As a consequence, all kernel addresses lead to a valid physical address when translating them, and the CPU can access the content of such addresses. The only difference to accessing a user space address is that the CPU raises an exception as the current permission level does not allow to access such an address. Hence, the user space cannot simply read the contents of such an address.

The kernel is the core part of the operating system that should be inaccessible by normal users; it has its own memory to store not only core system data but also data from all of the users (for example, when it has to be written to or read from permanent storage). Even here, though, the system relies on virtualization — that memory is the same physical memory users utilize for their applications. It is up to the CPU to keep track of what parts of memory belong to whom, and this is where the vulnerabilities come in.

Speculative Execution

I just referenced three critical parts of a computer: the processor, memory, and permanent storage. In fact, the architecture for storing data is even more complex than that:

  • Registers are the fastest form of memory, accessible every single clock cycle (that is, a 2.0 GHz processor can access registers two billion times a second). They are also the smallest, usually only containing the inputs and outputs for the current calculation.
  • There are then various levels of cache (L1, L2, etc.) that are increasingly slower and, on the flipside, increasingly larger and less expensive. This cache is located in a hierarchy: data that is needed immediately will be moved from the registers to L1 cache, for example; slightly less necessary data will be in L2, then L3, etc.
  • The next major part of the memory hierarchy is main memory, that is, system RAM. While the amount of cache is dependent on the processor model, the amount of memory is up to the overall system builder. This memory is massively slower than cache, but it is also massively larger and far less expensive.
  • The last part of the memory hierarchy, at least on a single computer, is permanent storage — the hard drive. Solid-state drives (SSDs) have made a huge difference in speed here, but even then permanent storage is massively slower than main memory, with the same tradeoffs: you can have a lot more of it at a much lower price.
  • While not part of the traditional memory hierarchy, cloud applications often have permanent storage on a separate physical server on the same network; the usual tradeoffs apply — very slow access in exchange for other benefits, in this case keeping data separate from its application. (A rough timing sketch follows this list.)
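
To make the hierarchy concrete, here is a minimal sketch in C, my own illustration rather than anything from the sources above, of how you might observe it yourself: time accesses that stay inside a small, cache-resident buffer, then time accesses that stride across a buffer far larger than any cache. The buffer sizes, the stride, and the size of the gap you will see are assumptions that vary by machine; the point is only that the gap is large.

```c
/* A rough sketch: average access time for a cache-resident buffer versus a
 * buffer far larger than any cache. Sizes and stride are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(volatile char *buf, size_t size, size_t stride, long iters) {
    struct timespec start, end;
    volatile char sink = 0;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iters; i++) {
        sink ^= buf[idx];            /* the access being timed */
        idx = (idx + stride) % size; /* walk the buffer */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    (void)sink;
    return ((end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec)) / iters;
}

int main(void) {
    const long iters = 10000000;
    const size_t small = 4096;              /* fits easily in L1 cache */
    const size_t large = 256 * 1024 * 1024; /* far larger than L3 cache */
    char *buf = malloc(large);
    if (buf == NULL) return 1;
    for (size_t i = 0; i < large; i++) buf[i] = (char)i; /* warm the pages */

    printf("cache-resident: %.2f ns/access\n", ns_per_access(buf, small, 64, iters));
    printf("memory-bound:   %.2f ns/access\n", ns_per_access(buf, large, 4097, iters));
    free(buf);
    return 0;
}
```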

To be sure, “very slow” is all relative — we are talking about nanoseconds here. This post by Jeff Atwood puts it in human terms:

That infinite space “between” what we humans feel as time is where computers spend all their time. It’s an entirely different timescale. The book Systems Performance: Enterprise and the Cloud has a great table that illustrates just how enormous these time differentials are. Just translate computer time into arbitrary seconds:

Event                            Actual time   Scaled (“human”) time
1 CPU cycle                      0.3 ns        1 s
Level 1 cache access             0.9 ns        3 s
Level 2 cache access             2.8 ns        9 s
Level 3 cache access             12.9 ns       43 s
Main memory access               120 ns        6 min
Solid-state disk I/O             50-150 μs     2-6 days
Rotational disk I/O              1-10 ms       1-12 months
Internet: SF to NYC              40 ms         4 years
Internet: SF to UK               81 ms         8 years
Internet: SF to Australia       183 ms        19 years
OS virtualization reboot         4 s           423 years
SCSI command time-out            30 s          3000 years
Hardware virtualization reboot   40 s          4000 years
Physical system reboot           5 m           32 millennia

[…]

The late, great Jim Gray…also had an interesting way of explaining this. If the CPU registers are how long it takes you to fetch data from your brain, then going to disk is the equivalent of fetching data from Pluto.

Gray presented this slide while at Microsoft, which explains the “Olympia, Washington” reference. Let me extend his analogy:

Suppose you were a college student interning for the summer at Microsoft in Redmond, and you were packing clothes at home in Olympia. Now Seattle summers can be quite finicky — it could be blustery and rainy, or hot and sunny. It’s often hard to know what the weather will be like until the morning of. To that end, the prudent course of action would not be to pack only one set of clothes, but rather to pack clothes for either possibility. After all, it is far faster to change clothes from a suitcase than it is to drive home to Olympia every time the weather changes.

This is where the analogy starts to fall apart: what modern processors do to alleviate the time it takes to fetch data is not only fetch more data than they might need, but actually do calculations on that data ahead of time. This is known as speculative execution, and it is the heart of these vulnerabilities. To put this analogy in algorithmic form:

  • Check the weather (execute multiple sub-processes that trigger sensors, relay data, etc.)
    • If the weather is sunny, wear shorts-and-t-shirt
    • Else wear jeans-and-sweatshirt

Remember, computers are stupid, but they are stupid fast: executing “wear shorts-and-t-shirt” or “wear jeans-and-sweatshirt” takes nanoseconds — what takes time is waiting for the weather observation. So to save time the processor will get you dressed before it knows the weather, usually based on history — what was the weather the last several days? That means you can decide on footwear, accessories, etc., all while waiting for the weather observation. That’s the other thing about processors: they can do a lot of things at the same time. To that end the fastest possible way to get something done is to guess what the final outcome will be and backtrack if necessary.
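
To put the same algorithm in actual code: below is a sketch in C of my own, not anything from the disclosures. Notice that the source is entirely ordinary; all of the guessing happens in hardware, invisible at this level.

```c
/* The "get dressed" algorithm above, as C. Speculation is a hardware
 * optimization; nothing about it is visible in the source code. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

static bool weather_is_sunny(void) {
    /* Stands in for the slow observation: think of this as a fetch from
     * main memory that takes hundreds of cycles to resolve. */
    return rand() % 2 == 0;
}

int main(void) {
    /* Rather than stall here, the branch predictor guesses the outcome from
     * recent history and executes the predicted arm ahead of time; a wrong
     * guess is rolled back, costing nothing versus having simply waited. */
    if (weather_is_sunny())
        puts("shorts and t-shirt");
    else
        puts("jeans and sweatshirt");
    return 0;
}
```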

Meltdown

Now, imagine the algorithm was changed to the following:

  • Check your manager’s calendar to see if they will be in the office
    • If they will be in the office, wear slacks and collared-shirt
    • If they will not be in the office, wear shorts-and-t-shirt

There’s just one problem: you’re not supposed to have access to your manager’s calendar. Keep in mind that computers are stupid: the processor doesn’t know this implicitly; it has to actually check if you have access. So in practice this algorithm is more like this:

  • Check your manager’s calendar to see if they will be in the office
    • Check if this intern has access to their manager’s calendar
      • If the intern has access, access the calendar
        • If they will be in the office, wear slacks and collared-shirt
        • If they will not be in the office, wear shorts-and-t-shirt
      • If the intern does not have access, stop getting dressed

Remember, though, computers are very good at doing lots of things at once, and not very good at looking up data; in this case the processor will, under certain conditions, look at the manager’s calendar and decide what to wear before it knows whether or not it should look at the calendar. If it later realizes it shouldn’t have access to the calendar it will undo everything, but the clothes might end up slightly disheveled, which means you might be able to back out the answer you weren’t supposed to know.

I already said that the analogy was falling apart; it is now in complete tatters, but this, in broad strokes, is Meltdown: the processor will speculatively fetch and execute privileged data before it knows if it should or not; that process, though, leaves traces in cache, and those traces can be captured by a non-privileged user.
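
What does “capturing traces in cache” look like? The standard technique is called Flush+Reload, and a simplified sketch of it follows. To be clear about what is and is not here: this is only the measurement side, with the transient privileged read replaced by an ordinary constant; the serializing fences are omitted, and the 120-cycle hit/miss threshold is an assumption that varies by CPU. _mm_clflush and __rdtscp are real x86 intrinsics.

```c
/* Flush+Reload sketch: how a byte value "parked" in the cache is read back
 * purely by timing. Not an exploit; the transient access is simulated. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[256 * 4096]; /* one cache line, 4096 bytes apart, per byte value */

static uint64_t time_read(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                     /* the timed access */
    return __rdtscp(&aux) - start;
}

int main(void) {
    for (int v = 0; v < 256; v++)
        probe[v * 4096] = 1;      /* touch pages so faults don't pollute timing */

    for (int v = 0; v < 256; v++)
        _mm_clflush(&probe[v * 4096]); /* 1. Flush: evict every probe line */

    /* 2. Stand-in for the transient access: in Meltdown this index would be a
     * byte read speculatively from kernel memory; here it is just a constant. */
    uint8_t secret = 42;
    *(volatile uint8_t *)&probe[secret * 4096]; /* caches exactly one line */

    /* 3. Reload: the one fast ("hit") line encodes the byte. A real
     * proof-of-concept randomizes this scan order to defeat the prefetcher. */
    for (int v = 0; v < 256; v++) {
        uint64_t cycles = time_read(&probe[v * 4096]);
        if (cycles < 120) /* hit/miss threshold: an assumption, varies by CPU */
            printf("recovered byte: %d (%llu cycles)\n", v, (unsigned long long)cycles);
    }
    return 0;
}
```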

Explaining Spectre

Spectre is even more devious, but harder to pull off: remember, multiple users are using the same processor — roommates, if you will. Suppose I pack my suitcase the same as you, and then I “train” the processor to always expect sunny days (perhaps I run a simulation program and make every day sunny). The processor will start choosing shorts-and-t-shirt ahead of time. Then, when you wake up, the processor will have already chosen shorts-and-t-shirt; if it is actually rainy, it will put the shorts-and-t-shirt back, but ever-so-slightly disheveled.

This analogy has gone from tatters to total disintegration — it really doesn’t work here. Your data isn’t simply retrieved from main memory speculatively; it is momentarily parked in cache while the processor follows the wrong branch. It is quickly removed once the processor fixes its error, but I can still figure out what data was there — which means I’ve now stolen your data.
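
For completeness, here is the bounds-check gadget at the heart of Spectre variant 1, adapted from the example in the Spectre paper, with comments added; the array sizes are illustrative, and the attacker’s mistraining and timing code is not shown.

```c
/* Spectre variant 1: bounds check bypass, adapted from the Spectre paper. */
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];        /* victim data; the attacker controls the index x */
size_t  array1_size = 16;
uint8_t array2[256 * 512]; /* probe array the attacker can later time */

uint8_t victim_function(size_t x) {
    /* After the branch predictor has been "trained" with in-bounds values of
     * x, an out-of-bounds x still speculatively executes the body: array1[x]
     * transiently reads memory the check should protect, and the dependent
     * load from array2 parks a cache line whose position encodes that byte. */
    if (x < array1_size) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```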

Meltdown is easier to explain because — Intel’s protestations to the contrary (Meltdown also affects Apple’s processors) — it is due to a design flaw. The processor is responsible for checking if data can be accessed, and to check too slowly, such that the data can be stolen, is a bug. That is also why Meltdown can be worked around in software (basically, there will be an extra step checking permissions before using the data, which is why the patch causes a performance hit).

Spectre is something else entirely: this is the processor acting as designed. Computers do basic calculations unfathomably quickly, but take forever to get the data to make those calculations: therefore doing calculations without waiting for bottlenecks, based on best guesses, is the best possible way to leverage this fundamental imbalance. Most of the time you will get results far more quickly, and if you guess wrong you are no slower than you would have been had you done everything in order.

This, too, is why Spectre affects all processors: the speed gains from leveraging modern processors’ parallelism and execution speed are so massive that speculative execution is an obvious choice; that the branch predictor might be trained by another user such that cache changes could be tracked simply didn’t occur to anyone until the last year (that we know of).

And, by extension, Spectre can’t be fixed by software: specific implementations can be blocked, but the vulnerability is built-in. New processors will need to be designed, but the billions of processors in use aren’t going anywhere. We’re going to have to muddle through.
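
“Blocking specific implementations” looks like this in practice: compilers and kernels now insert speculation barriers at dangerous branches, one gadget at a time. Below is a hedged sketch of the variant 1 fix, reusing the arrays from the sketch above; _mm_lfence is a real x86 intrinsic, but deciding which branches need it is a judgment call, which is exactly why this is mitigation rather than a cure.

```c
/* The same gadget with a speculation barrier inserted after the bounds check. */
#include <stddef.h>
#include <stdint.h>
#include <x86intrin.h>

extern uint8_t array1[], array2[]; /* defined as in the previous sketch */
extern size_t array1_size;

uint8_t victim_function_fenced(size_t x) {
    if (x < array1_size) {
        _mm_lfence(); /* the loads below cannot begin until the bounds
                         check has actually resolved */
        return array2[array1[x] * 512];
    }
    return 0;
}
```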

Spectre and the State of Technology

I ended 2017 without my customary “State of Technology” post, and just as well: Spectre is a far better representation than anything I might have written. Faced with a fundamental imbalance (data fetch slowness versus execution speed), processor engineers devised an ingenious system optimized for performance, but having failed to envision the possibility of bad actors abusing the system, everyone was left vulnerable.

The analogy is obvious: faced with a fundamental imbalance (the difficulty of gaining and retaining users versus the ease of rapid iteration and optimization), Internet companies devised ingenious systems optimized for engagement, but having failed to envision the possibility of bad actors abusing the system, everyone was left vulnerable.

Spectre, though, helps illustrate why these issues are so vexing:

  • I don’t believe anyone intended to create this vulnerability
  • The vulnerability might be worth it — the gains from faster processors have been absolutely massive!
  • Regardless, decisions made in the past are in the past: the best we can do is muddle through

So it is with the effects of Facebook, Google/YouTube, etc., and the Internet broadly. Power comes from giving people what they want — hardly a bad motivation! — and the benefits still may — probably? — outweigh the downsides. Regardless, our only choice is to move forward.

The 2017 Stratechery Year in Review

The slogan for Stratechery’s sister podcast, Exponent, is “Tech and Society”; never has that felt more appropriate than 2017. This year I wrote 136 Daily Updates (including tomorrow) and 46 Weekly Articles, and, as per tradition, today I summarize the most popular and most important posts of the year: tech and society figure prominently.

You can find previous years here: 2016 | 2015 | 2014 | 2013

Here is the 2017 list.

The Five Most-Viewed Articles

Stratechery not only had record traffic in 2017, but year-over-year growth was also the highest ever; unsurprisingly, the first three articles were in Stratechery’s all-time top five in terms of traffic (Amazon’s New Customer was number one by a long shot), and the other two in the top twelve.

  1. Amazon’s New Customer — The key to understanding Amazon’s purchase of Whole Foods is to understand that Amazon didn’t buy a retailer: the company bought a customer.
  2. Microsoft’s Monopoly Hangover — There are striking similarities between Microsoft today and IBM in the Lou Gerstner era, but today’s IBM should be a warning to Redmond.
  3. Alexa: Amazon’s Operating System — Money is made at chokepoints, and the most valuable chokepoints are operating systems; Amazon has built exactly that with Alexa.
  4. Facebook and the Cost of Monopoly — Facebook gave one of the worst keynotes in a long time: there was no vision, just the adoption of Snap’s. It’s the inevitable outcome of a monopoly.
  5. The Uber Dilemma — Benchmark’s lawsuit against Uber is extraordinary; that is because Uber, despite everything, remains an extraordinary company. Game theory explains the implications.

Five Big Ideas

These five posts very much capture the biggest themes of 2017: the power of aggregators, one of the most important acquisitions in media history, Bitcoin, sexual harassment, and Apple and China.

  • Defining Aggregators — Building on Aggregation Theory, this provides a precise definition of the characteristics of aggregators, and a classification system based on suppliers. Plus, how to think about aggregator regulation.
  • Disney and Fox — Disney’s rumored acquisition of 21st Century Fox is all about competing with Netflix; whether or not that is a good thing depends on your frame of reference. (See also, Disney’s Choice)
  • Tulips, Myths, and Cryptocurrencies — Did you hear the one about the tulip bubble? It’s almost certainly a myth. It is myths, though, that explain why cryptocurrencies are here to stay.
  • Goodbye Gatekeepers — Harvey Weinstein was a gate-keeper — a position that existed in multiple industries, including the media. That entire structure, though, is untenable on the Internet, and that’s a good thing.
  • Apple’s China Problem — Apple had mixed earnings: most of the world was great, but China was bad again. The reason is that in China WeChat matters more than iOS.

Five Posts About Technology and Society

The biggest theme of all, though, at least in technology, was the impact technology is having on society broadly, and how society might respond. At the center of everything was Facebook.

  • Manifestos and Monopolies — Facebook has long had too much power, but Mark Zuckerberg’s expressed willingness to use said power for political ends means it’s time to consider countermeasures (See also, the afore-linked Facebook and the Cost of Monopoly)
  • Tech Goes to Washington — Facebook, Google, and Twitter testified before a Senate committee: it provided evidence of how tech prefers power over decentralization, even if it means regulation.
  • Why Facebook Shouldn’t Be Allowed to Buy tbh — Facebook is acquiring tbh, another burgeoning social network; regulators erred in allowing the Instagram and WhatsApp acquisitions, but there is no better place to start enforcing the law than now.
  • Inspired Media — The key for products and politicians used to be maximizing paid and earned media. What matters on the Internet, though, is inspired media.
  • The Local News Business Model — Subscriptions are the future of local news: the key, though, is getting rid of newspapers (See also, Faceless Publishers)

See also: The Great Unbundling and the talk I gave at Recode Media.

The Year in Daily Updates

I am incredibly proud of the Daily Update this year: in my very biased opinion, in 2017 the depth and importance of Daily Updates easily matched Weekly Articles. Increasingly, Daily Updates were organized around a single company or topic, and often previewed themes that were later expounded on in Weekly Articles. Here are some of my favorites:

I also conducted four interviews for The Daily Update:

Finally, while I’m not sure if this will become an annual thing or not, for the first time I christened Tech’s Person of the Year: Susan Fowler. In addition, I have now made this Daily Update free to read — Fowler’s impact was extraordinary.


I can’t say it enough: I am so grateful to Stratechery’s readers and especially subscribers for making all of these posts possible. I wish all of you a Merry Christmas and Happy New Year, and I’m looking forward to a great 2018!

Disney and Fox

It’s always a risk writing about a deal before it is official: CNBC reported a month ago that Disney was in talks to acquire many of 21st Century Fox’s assets, including its eponymous movie studio, TV production company, cable channels, and international assets (but not the Fox broadcast network, Fox News, FS1 — Fox’s sports channel — and Fox Business). All was quiet until last week, when CNBC again reported that the deal was on, and now included 21st Century Fox’s Regional Sports Networks.

As I write this it is now widely reported that the deal is imminent; most notably, Comcast has dropped out of the bidding, which means the only question is whether or not Disney can close the deal: they would be crazy not to.

The Logic of Acquisition

The standard reason given for most acquisitions is so-called “synergy”: the idea that the two firms together can generate more revenue with lower costs than they could independently; most managers point towards the second half of that equation, promising investors significant cuts through reducing the number of workers doing the same thing. Certainly that is an argument in Disney’s favor: nearly everything 21st Century Fox does Disney does as well.

Still, it’s not exactly a convincing argument; acquisitions also incur significant costs: the price of the acquired asset includes a premium that usually more than covers whatever cost savings might result, and there are significant additional costs that come from integrating two different companies. Absent additional justification, the cost-savings argument comes across as cover for management empire-building, not value creation.

That’s not always the reason though: the cost-savings argument is often a fig-leaf for an acquisition that reduces competition; better for management to claim synergies in costs than synergies that result in cornering a market. The result is managers who routinely make weak arguments in public and strong arguments in the boardroom.

The best sort of acquisitions, though, are best described by the famous Wayne Gretzky admonition, “Skate to where the puck is going, not where it has been”; these are acquisitions that don’t necessarily make perfect sense in the present but place the acquirer in a far better position going forward: think Google and YouTube, Facebook and Instagram, or Disney’s own acquisition of Capital Cities (which included ESPN).

What makes this potential acquisition so intriguing is that it is a mixture of all three — and which of the three you pick depends on the time frame within which you view the deal.

The Not-so-distant Past

Go back to that Capital Cities acquisition: the 1995 deal was, at the time, the second largest acquisition ever, and it primed Disney to dominate the burgeoning cable television era.

I sketched out the structure of cable TV earlier this year in The Great Unbundling:

The key takeaway should be a familiar one: economic power came from controlling distribution. The cable line was the only way for consumers to obtain the fullest possible array of in-home entertainment, which meant that distributors were able to charge consumers as much as they could bear.

The real negotiations in this value chain took place between content producers and distributors, which ultimately determined who made the most profit, and here Capital Cities + Disney was a powerful combination:

  • First and foremost, ESPN was well on the way to establishing itself as the most indispensable cable channel amongst consumers, allowing it to command carriage fees that were multiples higher than any other channel.
  • Secondly, Disney was able to feed content to ABC just as the FCC loosened regulations on broadcast networks producing their own content (instead of acquiring it).
  • Third, Disney built a bundle within the bundle: distributors had to not only pay for ESPN, but also for the Disney channel, A&E, Lifetime, and the host of spinoff channels that followed; any proper accounting for ESPN’s ultimate contribution to Disney’s bottom line should include the above-average carriage fees charged by all of Disney’s properties.

This all seems obvious in retrospect, but at the time most of the attention was on ABC, then the most-profitable broadcast channel. The puck, though, was already moving towards cable.

The Fading Present

The dominant story in media over the last few years has been the slow-but-steady breakdown of that cable TV model. In August 2015, Disney CEO Bob Iger — who joined the company in that Capital Cities deal — admitted on an earnings call that ESPN and the company’s other networks, which had previously generated 45% of Disney’s revenue and 68% of its profit, were losing customers:

We are realists about the business and about the impact technology has had on how product is distributed, marketed and consumed. We are also quite mindful of potential trends among younger audiences, in particular many of whom consume television in very different ways than the generations before them. Economics have also played a part in change and both cost and value are under a consumer microscope. All of this has and will continue to put pressure on the multichannel ecosystem, which has seen a decline in overall households as well as growth in so-called skinny or cable light packages.

In fact, as I detailed earlier this year, Disney had not been realistic at all about the “impact technology [had] had on how product is distributed, marketed and consumed”:

Back in 2012 the media company signed a deal with Netflix to stream many of the media conglomerate’s most popular titles…Iger’s excitement was straight out of the cable playbook: as long as Disney produced differentiated content, it could depend on distributors to do the hard work of getting customers to pay for it. That there was a new distributor in town with a new delivery method only mattered to Disney insomuch as it was another opportunity to monetize its content.

The problem now is obvious: Netflix wasn’t simply a customer for Disney’s content, the company was also a competitor for Disney’s far more important and lucrative customer — cable TV. And, over the next five years, as more and more cable TV customers either cut the cord or, more critically, never got cable in the first place, happy to let Netflix fulfill their TV needs, Disney was facing declines in a business it assumed would grow forever.

That business was predicated on cable’s monopoly on in-home entertainment; what Netflix offered was an alternative:

Netflix’s path to a full-blown cable TV competitor is one of the canonical examples of a ladder strategy:

Netflix started by using content that was freely available (DVDs) to offer a benefit — no due dates and a massive selection — that was orthogonal to the established incumbent (Blockbuster). This built up Netflix’s user base, brand recognition, and pocketbook

Netflix then leveraged their user base and pocketbook to acquire streaming rights in the service of a model that was, again, orthogonal to incumbents (linear television networks). This expanded Netflix’s user base, transformed their brand, and continued to increase their buying power

With an increasingly high-profile brand, large user base, and ever deeper pockets, Netflix moved into original programming that was orthogonal to traditional programming buyers: creators had full control and a guarantee that they could create entire seasons at a time

Each of these intermediary steps was a necessary prerequisite to everything that followed, culminating in yesterday’s announcement: Netflix can credibly offer a service worth paying for in any country on Earth, thanks to all of the IP it itself owns. This is how a company accomplishes what, at the beginning, may seem impossible: a series of steps from here to there that build on each other. Moreover, it is not only an impressive accomplishment, it is also a powerful moat; whoever wishes to compete has to follow the same time-consuming process.

Another way to characterize Netflix’s increasing power is Aggregation Theory: Netflix started out by delivering a superior user experience of an existing product (DVDs) to a dedicated set of customers, leveraged that customer base to gain new kinds of supply (streaming content), gaining more customers and more supply, and ultimately leveraged those customers to modularize supply such that the streaming service now makes an increasing amount of its content directly.

What Disney is seeking to prove, though, is that it can compete with Netflix directly by following a very different path.

The Onrushing Future

I’ve long argued that the only way to break away from the power of aggregators is through differentiation; it’s why I argued after that Iger earnings call that Disney would be OK — after all, differentiated content is Disney’s core competency, as demonstrated by its ability to extract profits from cable companies.

The implication of Netflix’s shift to original programming, though, isn’t simply the fact that the streaming company is a full-on competitor for cable TV: it is a competitor for differentiated content as well. That gives Netflix far more leverage over content suppliers like Disney than the cable companies ever had.

Consider the comparison in terms of BATNA (Best Alternative to a Negotiated Agreement): for distributors the alternative to carrying ESPN was losing a huge number of customers who cared about seeing live sports; that’s not much of an alternative! Netflix, on the other hand, can go — and is going! — straight to creators for content that viewers can watch instead of whatever Disney may choose to withhold if Netflix’s price is unsatisfactory.1 Clearly it’s working: Netflix isn’t simply adding customers, it is raising prices at the same time, the surest sign of market power.

Therefore, the only way for Disney to avoid commoditization is to itself go vertical and connect directly with customers: thus the upcoming streaming service, the removal of its content from Netflix, and, presuming it is announced, this deal. When the acquisition was rumored last month, I wrote in a Daily Update:

This gets at why this deal makes so much sense for Disney. The company already announced that Star Wars and Marvel content would indeed be a part of the streaming service (that is what was still up in the air when I wrote Disney’s Choice), but the company is absolutely right to not stop there: being a true Netflix competitor means having more content, not less — and that content doesn’t necessarily have to be fresh! Streaming shifted television from a world based on scarcity — there are only 24 hours in the day times however many channels there are, and a channel can only show one thing at a time — to one based on abundance: you can watch anything you want at any time and it can be different from everyone else.

Moreover, not only does 21st Century Fox have a lot of content, it has content that is particularly great for filling out a streaming library: think The Simpsons, or Family Guy; according to estimates I’ve seen, in terms of external content Fox owns eight of Netflix’s most streamed shows — more than Disney’s six. This content is useful not only for driving sign-ups with certain audiences, but especially for reducing churn; the latter requires a different content strategy than the former.

Whereas Netflix laddered-up to its vertical model and used its power as an aggregator of demand to gain power over supply, Disney is seeking to leverage — and augment — its supply to gain demand. The end result, though, would look awfully similar: a vertically integrated streaming offering that attracts and keeps customers with exclusive content, augmented with licensing deals.

If Disney is successful, it will be a truly remarkable shift: away from a horizontal content company predicated on leveraging its investment in content across as many outlets as possible, to a vertical streaming company that uses its content to achieve higher average revenue from a smaller number of customers willing to pay directly — smaller in the United States, that is; as Netflix is demonstrating, owning it all means the ability to extend the model worldwide.2

The Antitrust Question

I suspect the final hangup in Disney and 21st Century Fox’s negotiations is termination fees: who pays whom if the deal falls through. There is an obvious reason for concern — antitrust. That, of course, gets at some of the reasons (but not all) as to why the deal makes sense in the first place. What is fascinating, though, is that the nature of the concern changes depending on the time frame through which one views this deal.

If one starts with a static view of the world as it is at the end of 2017, then there may be some minor antitrust concerns, but probably nothing that would stop the deal. Disney might have to divest a cable channel or two (the company’s power over distributors would be even stronger; basically the opposite of some of the concerns that halted the Comcast acquisition of Time Warner), and potentially be limited in its ability to make operational decisions about Hulu (Disney would have a controlling stake after the merger; Comcast was similarly restricted after acquiring NBC Universal, but there the concern was more about Comcast’s conflict of interest with regards to its cable TV business competing with Hulu). The Hulu point is interesting in its own right: Disney could choose to focus its streaming efforts there instead of building its own service, but I suspect it would rather own it all.

In addition, Disney and 21st Century Fox combined for 40% of U.S. box office revenue in 2016; that probably isn’t enough to stop the deal, and as silly as it sounds, don’t underestimate the clamoring of fans for the unification of the Marvel Cinematic Universe in swaying popular opinion!

The view changes, though, if you look only a year or two ahead: what I just described above — the “truly remarkable shift” in Disney’s business model — is a shift to vertical foreclosure. The entire point of Disney vastly increasing its content library is to offer that library exclusively on its own streaming service, not competitors’ — especially not on Netflix. Given the current state of antitrust law, which has ignored vertical mergers for years, this would normally be an academic point, except that state was fundamentally shifted just a few weeks ago, when the Department of Justice sued to block AT&T’s acquisition of Time Warner due to vertical foreclosure concerns.

It’s not a perfect comparison: for one thing, AT&T’s distribution service (DirecTV) already exists, for another, it is impossible to see that acquisition as anything but a vertical one; as I just noted, though, today the Disney-Fox acquisition is a horizontal one. Would the Justice Department sue based on Disney’s potential, as opposed to its reality? And there’s a political angle too: if the AT&T-Time Warner acquisition were indeed blocked as retaliation by the Trump administration against CNN, then it would follow that the administration would be willing to accommodate 21st Century Fox Chairman Rupert Murdoch.

What is most interesting, though, is the long-term view: I have been writing for years that Netflix’s status as an aggregator was positioning the company to dominate entertainment, and it was only eight months ago that I despaired of Disney and the other entertainment companies ever figuring out how to fight back. What has been so impressive over the last few months is the extent and speed with which Disney has seemingly figured it out — and acted accordingly.

Is that a bad thing? Note how much the situation changed once Netflix became a viable competitor for cable TV: competition is a wonderful thing, most of all for consumers. To that end, might it be better for consumers, not-so-much today but ten years from now, if Disney were fully empowered to compete with Netflix? What is preferable? A dominant streaming company and a collection of content companies trying to escape the commoditization trap, or two dominant streaming companies that can at least try to hold each other accountable?

It’s not a great choice, to be honest; certainly Amazon Prime Video is a possible competitor, although the service is both empowered and held back by its business model. Other tech companies are making noises in the area, but more tech company dominance hardly seems like an answer!

Frankly, I’m not sure of the answer: I am both innately suspicious of these huge mergers and also sympathetic because I see so clearly the centralizing power of the Internet. The big are combining because the giants are coming: if anything, they are already here.

  1. The primary reason Netflix doesn’t have sports content is that it is not evergreen and thus doesn’t provide a cumulative advantage in terms of lowering customer acquisition costs over time; however, not being subject to the one-sided negotiations inherent to sports rights is a nice side benefit.
  2. Just as interesting is the prospective acquisition of regional sports networks and what that means for the future of ESPN; I will discuss this on tomorrow’s Daily Update.

The Pollyannish Assumption

There was an interesting aside to Apple’s bad week, which I wrote about yesterday. It turns out that a user posted the macOS login-as-root bug to Apple’s support forums back on November 13:

On startup, click on “Other”

Enter username: root and leave the password empty. Press enter. (Try twice)
If you’re able to log in (hurray, you’re the admin now), then head over to System Preferences>Users & Groups and create a new Admin account.

Now restart and login to the new Admin Account (you may need a new Apple Id). Once you’re logged into this new Admin Id, you can again proceed to your System Preferences>Users & Groups. Open the Lock Icon with your new Admin ID/Password. Assign “Allow user to administer this computer” to your original Apple ID. Restart.

Most of the discussion about this tidbit has centered on the fact that this user later noted they had found the solution on some other forum they couldn’t remember (this reply has since been hidden on the original thread, but Daring Fireball quoted it here). Observers have largely given Apple a pass for having missed the posting on their own forums: those forums are mostly user-generated content (both questions and answers), and Apple explicitly asks posters to file bug reports with Apple directly. It’s understandable that the company missed this post two weeks ago.

For the record, I agree. Managing user-generated content is really hard.

The User-Generated Content Conundrum

Three recent bits of news bring this point about user-generated content home.

First, Twitter; from Bloomberg:

Twitter Inc. said it allowed anti-Muslim videos that were retweeted by President Donald Trump because they didn’t break rules on forbidden content, backtracking from an earlier rationale that newsworthiness justified the posts. On Thursday, a Twitter spokesperson said “there may be the rare occasion when we allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability.”

Second, Facebook; from The Daily Beast:

In the wake of the #MeToo movement, countless women have taken to Facebook to express their frustration and disappointment with men and have been promptly shut down or silenced, banned from the platform for periods ranging from one to seven days. Women have posted things as bland as “men ain’t shit,” “all men are ugly,” and even “all men are allegedly ugly” and had their posts removed…In late November, after the issue was raised in a private Facebook group of nearly 500 female comedians, women pledged to post some variation of “men are scum” to Facebook on Nov. 24 in order to stage a protest. Nearly every woman who carried out the pledge was banned…

When reached for comment a Facebook spokesperson said that the company is working hard to remedy any issues related to harassment on the platform and stipulated that all posts that violate community standards are removed. When asked why a statement such as “men are scum” would violate community standards, a Facebook spokesperson said that the statement was a threat and hate speech toward a protected group and so it would rightfully be taken down.

Third, YouTube. From BuzzFeed:

YouTube is adding more human moderators and increasing its machine learning in an attempt to curb its child exploitation problem, the company’s CEO, Susan Wojcicki, said in a blog post on Monday evening. The company plans to increase its content moderation workforce to more than 10,000 employees in 2018 in order to help screen videos and train the platform’s machine learning algorithms to spot and remove problematic children’s content. Sources familiar with YouTube’s workforce numbers say this represents a 25% increase from where the company is today.

In the last two weeks, YouTube has removed hundreds of thousands of videos featuring children in disturbing and possibly exploitative situations, including being duct-taped to walls, mock-abducted, and even forced into washing machines. The company said it will employ the same approach it used this summer as it worked to eradicate violent extremist content from the platform.

I’m going to be up front with you: I don’t have any clear-cut answers here. One of the seminal Stratechery posts is called Friction, and while I’ve linked it many times, this line is particularly apt:

Friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

This is exactly the root of the problem: I don’t believe these platforms so much drive this abhorrent content (the YouTube videos are just horrible) as they make it easier than ever before for humans to express themselves, and the reality of what we are is both more amazing and more awful than most anyone ever appreciated.

This is something I have started to come to grips with personally: the exact same lack of friction that results in an unprecedented explosion in culture, music, and art of all kinds, the telling of stories about underrepresented and ignored parts of the population, and yes, the very existence of a business like mine, also results in awful videos being produced and consumed in shocking numbers, abuse being widespread, and even the upheaval of our politics.

The problem is that the genie is out of the bottle: lamenting the loss of friction not only won’t bring it back; it makes it harder to figure out what to do next. I think, though, the first place to start — for me anyways — is to acknowledge and fully internalize what I wrote back then: focusing on the upsides without acknowledging the downsides is to misevaluate risk and court disaster. And, for those inclined to see the negatives of the Internet, focusing on the downsides without acknowledging the upsides is to misevaluate reward and endanger massive future opportunities. We have to find a middle way, and neither side can do that without acknowledging and internalizing the inevitable truth of the other.

Content Policing

Go back to the Apple forum anecdote: policing millions of comments posted by hundreds of thousands of posters (I’m guesstimating the numbers here) is really hard, and it’s understandable that Apple missed the post in question; as bad as this bug was, the return on the investment required to catch this one comment simply isn’t there.

Apple is the easy one, and I started with the company on purpose: using a term like “return on investment” gets a whole lot more problematic when dealing with abuse and human exploitation. That doesn’t mean it isn’t a real calculation made by the relevant executives, though: in the case of Apple, I think most people would agree that whatever investment in forum moderation would have been necessary to catch this post before it surfaced on Twitter a couple of weeks later would be far better spent buttressing the internal quality control teams that missed the bug in the first place.

That the post was surfaced on Twitter is relevant too; the developer who tweeted about the bug wrote a post on Medium explaining his tweet:

A week ago the infrastructure staff at the company I work for stumbled on the issue while trying to help one of my colleagues recover access to his local admin account. The staff noticed the issue and used the flaw to recover my colleague’s account. On Nov 23, the staff members informed Apple about it. They also searched online and saw the issue mentioned in a few places already, even in Apple Developer Forum from Nov 13. It seemed like the issue had been revealed, but Apple had not noticed yet.

The tweet certainly got noticed, and the bug was fixed within the day. Now to be clear, this isn’t the appropriate way to disclose a vulnerability (to that point, Apple should clarify what exactly happened around that November 23rd disclosure), but broadly speaking, the power of social media is what got this bug fixed as quickly as it was.

Outside visibility and public demands for accountability are what drove the YouTube changes as well: BuzzFeed reported on the child exploitation issue last month after being tipped off by an activist named Matan Uziel, who had been rebuffed in his own efforts to contact YouTube. That YouTube was allegedly not receptive to his outreach is a bad thing; that there are plenty of ways to raise a ruckus such that the company must respond is a good one.

It also gives some outline of how YouTube can better approach the problem in the future: yes, the company is building machine learning algorithms, and yes, the company provides an option for viewers to report content, although that option is buried in a submenu.

The point of user reports is to leverage the scale of the Internet to police its own unfathomable scale: there are far more YouTube viewers than there could ever be moderators; meanwhile, there are 400 hours of video uploaded to YouTube every minute.
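To put that figure in perspective, here is a quick back-of-the-envelope calculation (my own illustrative assumptions about shift lengths and review speed, not YouTube’s numbers) of what reviewing everything at upload would require:

```python
# Back-of-the-envelope: what would it take to watch everything?
# The 400 hours/minute upload rate is the figure cited above;
# shift length and review speed are illustrative assumptions.

UPLOAD_HOURS_PER_MINUTE = 400

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24  # 576,000 hours/day

SHIFT_HOURS = 8      # assumed moderator workday
REVIEW_SPEED = 1.0   # assumed real-time (1x playback) review

moderators_needed = hours_uploaded_per_day / (SHIFT_HOURS * REVIEW_SPEED)
print(f"{hours_uploaded_per_day:,.0f} hours/day -> {moderators_needed:,.0f} moderators")
# 576,000 hours/day -> 72,000 moderators, every day,
# vs. the 10,000-person team YouTube announced.
```

Even granting review at several times playback speed, the gap remains enormous; screening everything is not an option. Hence user reports: they are the only mechanism that even theoretically scales with the platform.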

That approach, though, clearly isn’t enough: it is rooted in the pollyannish view of the Internet I described above — the idea that everything is mostly good but for some bad apples. A more realistic view — that humanity is capable of both great beauty and tremendous evil, and that the Internet makes it easier to express both — demands a more proactive approach. And, conveniently, YouTube already has tools in place.

YouTube’s Flawed Approach

On Google’s last earnings call, CEO Sundar Pichai said:

YouTube now has over 1.5 billion users. On average, these users spend 60 minutes a day on mobile. But this growth isn’t just happening on desktop and mobile. YouTube now gets over 100 million hours of watch time in the living room every day, and that’s up 70% in the past year alone.

A major factor driving this growth is YouTube’s machine-learning recommendation algorithm, which keeps users watching more videos; as BuzzFeed noted:

Thanks to YouTube’s autoplay feature for recommended videos, when users watch one popular disturbing children’s video, they’re more likely to stumble down an algorithm-powered exploitative video rabbit hole. After BuzzFeed News screened a series of these videos, YouTube began recommending other disturbing videos from popular accounts like ToysToSee.

The recommendation engine works hand-in-hand with search, which — as you would expect given its parent company — YouTube is very good at. Individuals who want disturbing content can find what they’re looking for, and then, in the name of engagement and pushing up those viewing numbers, YouTube gives them more.

This should expose the obvious flaw in YouTube’s current reporting-based policing strategy: the nature of search and recommendation algorithms is such that most YouTube viewers, who would be rightly concerned and outraged about videos of child exploitation, never even see the videos that need to be reported. In other words, YouTube’s design dooms its attempt to leverage the Internet broadly as moderator.

Those exact same search and algorithmic capabilities, though, made it trivial for Uziel and BuzzFeed to find a whole host of exploitative videos. The key difference between Uziel and BuzzFeed on the one hand and generic YouTube viewers on the other is that the former were looking for them.

Herein lies the fundamental failing of YouTube moderation: to date the video platform has operated under the assumptions that 1) YouTube has too much content to review it all and 2) the best way to moderate is to depend on its vast user base. It is a strategy that makes perfect sense under the pollyannish assumption that the Internet by default produces good outcomes, with only random exceptions.

A far more realistic view — because again, the Internet is ultimately a reflection of humanity, full of both goodness and its opposite — would assume that of course there will be bad content on YouTube. Of course there will be extremist videos recruiting for terrorism, of course there will be child exploitation, and of course there will be all manner of content deemed unacceptable by the vast majority of not just the United States but humanity generally.

Such a view would engender a far different approach to moderation. Consider this paragraph from YouTube CEO Susan Wojcicki about YouTube’s latest changes:

We understand that people want a clearer view of how we’re tackling problematic content. Our Community Guidelines give users notice about what we do not allow on our platforms and we want to share more information about how these are enforced. That’s why in 2018 we will be creating a regular report where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies. We are looking into developing additional tools to help bring even more transparency around flagged content.

Make no mistake, transparency is a very good thing (more on this in a moment). What is striking, though, is the reliance on flags: YouTube’s current moderation approach is inherently reactive, whether it be to viewer reports or, increasingly, to machine learning algorithms flagging content. Machine learning is a Google strength, without question, but ultimately the company is built on giving people what they want — including bad actors.
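To make the transparency piece concrete: the aggregation Wojcicki describes need not be complicated. Here is a minimal sketch, with hypothetical category and action names of my own (YouTube’s actual schema is not public), of the kind of per-category tallies such a report entails:

```python
from collections import Counter
from dataclasses import dataclass

# A hypothetical flag record; the field names are illustrative,
# not YouTube's real data model.
@dataclass
class Flag:
    video_id: str
    category: str  # e.g. "child_safety", "violent_extremism", "spam"
    action: str    # e.g. "removed", "age_restricted", "no_violation"

def aggregate_report(flags: list[Flag]) -> dict[str, Counter]:
    """Tally actions taken per flag category: the aggregate data
    a transparency report could publish."""
    report: dict[str, Counter] = {}
    for flag in flags:
        report.setdefault(flag.category, Counter())[flag.action] += 1
    return report

flags = [
    Flag("a1", "child_safety", "removed"),
    Flag("b2", "child_safety", "removed"),
    Flag("c3", "spam", "no_violation"),
]
print(aggregate_report(flags))
# {'child_safety': Counter({'removed': 2}), 'spam': Counter({'no_violation': 1})}
```

The point is not the code, of course, but the commitment: publishing these aggregates on a regular schedule is what turns an internal metric into public accountability.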

Understanding Demand

A core precept of Aggregation Theory is that digital markets are driven by demand, not supply. This, by extension, is why Google and Facebook in particular dominate: in a world of effectively infinite web pages, the search engine that can pick out the proverbial needle in a haystack is king. It follows, then, that a content moderation approach that starts with supply is inherently inferior to one that starts with demand.

This is why it is critical that YouTube lose its pollyannish assumptions: were the company’s moderation approach to start with the assumption of bad actors, then child exploitation would be perhaps the most obvious place to look for problematic videos. Moreover, we know it works: that is exactly what Uziel and BuzzFeed did. If you know what you are looking for, you will, thanks to Google/YouTube’s search capabilities and recommendation algorithms, find it.

And then you can delete it.

Moreover, you can delete it efficiently. Despite my lecture about humanity containing both good and evil, I strongly suspect that the vast majority of those 400 hours uploaded every minute contain unobjectionable — even beautiful, or educational, or entertaining — content. What is the point, then, of even trying to view it all, a Sisyphean task if there ever was one? Starting with the assumption of bad actors and actively looking for their output — using YouTube and Google’s capabilities as aggregators — makes far more sense.
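In code terms, the difference is between waiting for flags to arrive and running the loop yourself. A minimal sketch of a demand-first sweep follows; every function name here is a hypothetical stand-in for internal search, recommendation, and review systems, not a real YouTube API:

```python
# A demand-first moderation loop: start from the queries a bad actor
# (or an investigator like Uziel or BuzzFeed) would use, rather than
# waiting for viewer reports. All functions are hypothetical stubs.

SEED_QUERIES = [
    # in practice, a list curated and updated by policy specialists
    "example exploitative query 1",
    "example exploitative query 2",
]

def search_videos(query: str) -> list[str]:
    """Stand-in for the platform's own search index."""
    return []  # would return candidate video IDs

def related_videos(video_id: str) -> list[str]:
    """Stand-in for the recommendation engine: the same algorithm
    that leads viewers down the rabbit hole can lead reviewers to it."""
    return []

def enqueue_for_review(video_id: str) -> None:
    """Stand-in for a human-review queue."""
    print(f"queued {video_id}")

def proactive_sweep() -> None:
    seen: set[str] = set()
    frontier = [vid for q in SEED_QUERIES for vid in search_videos(q)]
    while frontier:
        vid = frontier.pop()
        if vid in seen:
            continue
        seen.add(vid)
        enqueue_for_review(vid)
        # follow the recommendation graph outward from each hit
        frontier.extend(related_videos(vid))

proactive_sweep()
```

The crucial design choice is the frontier: every confirmed hit seeds further exploration through the recommendation graph, turning the very mechanism that serves bad actors against them.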

That, though, means letting go of the convenient innocence inherent to the worldview of most tech executives. I know the feeling: I want to believe that the Internet’s removal of friction and its enabling of anyone to publish is an inherently good thing, because I personally am, like these executives, a massive beneficiary. Reality is far more complicated; accepting reality, though, is always the first step towards policies that actually work.

Facebook, Twitter, and Politics

I would like to end this essay here; alas, most content moderation questions are not so clear-cut as YouTube and child exploitation. That is why I included the Twitter and Facebook excerpts above. Both demonstrate the potential downside of the approach I am recommending for YouTube: being proactive is a sure recipe for false positives.

I am reminded, though, of the famous Walt Whitman quote:

Do I contradict myself?
Very well then I contradict myself,
(I am large, I contain multitudes.)

It is impossible to navigate the Internet — that is, to navigate humanity — without dealing in shades of gray, and the challenges faced by Twitter and Facebook are perfect examples. I, for one, found President Trump’s retweets disgusting and Facebook’s bans unreasonable; on the other hand, who is Twitter to define what the President of the United States can or cannot post, and Facebook is at least acting consistently with its policies.

Indeed, these two examples are exactly why I have consistently called on these platforms to focus on being neutral. Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid abuse and to mute people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing and what they are not. I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.


The phrase “With great power comes great responsibility” is commonly attributed to Spider-Man, but it in fact stems from the French Revolution:

Ils doivent envisager qu’une grande responsabilité est la suite inséparable d’un grand pouvoir.

English translation: They must consider that great responsibility follows inseparably from great power.

Documenting why and how these platforms have power has, in many respects, been the ultimate theme of Stratechery over the last four-and-a-half years: this is a call to exercise that power, in part, and a request to refrain, in another. There is a line between what is broadly deemed unacceptable and what is still under dispute; the responsibility of these new powers that be is to actively search out the former, and keep their hands — and algorithms and policies — off the latter. The French Revolution offers a hint of the fates that await should this all go wrong.