
Herbert Hovenkamp at the FNE Competition Day 2025 (transcript)

31.12.2025
CeCo Chile

This note is a translation of the talk delivered by the renowned lawyer and academic Herbert Hovenkamp, titled “Antitrust in Dominated Digital Markets: U.S. Law Through 2025,” at the 22nd FNE Competition Day, held this past December 11.


Good morning. I can’t tell you how happy I am to be here, and how impressed I’ve been with both Chile as a country and with your antitrust division here and all the wonderful work that they’ve been doing. I hope my comments can add a little bit to what you’ve already been doing.

I want to talk today a little bit about antitrust policy in digital networks on the internet. My comments will be fairly general. They’ll be somewhat U.S.-focused because that’s where I do most of my work. But today we think of digital networks as both dominant and collaborative. The ones that get into trouble the most frequently are the dominated ones, but the others can get into trouble as well.

Consider two examples of collaborative networks that are dominant in their markets. The first is the telephone system, which supplies virtually universal telecommunications. The system itself, however, is operated by many, many firms. Many of them are in competition with each other, such as the cellular phone carriers, and even though they are agents within the same network, we have no difficulty treating their various agreements as collaborations that are reachable —in the case of the U.S.— under Section 1 of the Sherman Act.

The second, one we don’t think about very much, is email: also a dominant network, in which literally thousands of firms, called clients, supply email services with a certain amount of standard setting and collaboration so that messages arrive safely at their destinations. There is no dominant email carrier. Apple Mail is quite large. Gmail is quite large. Microsoft’s Outlook is actually quite a bit smaller. But it is very much a collaborative network in which people can have multiple email addresses and can switch from one to another, and it has not posed very many antitrust problems.

Now, with respect to dominated networks: firms like Microsoft, Google/Alphabet, and Apple run networks that were largely created by, and where most of the decision-making is directed by, a single firm. Microsoft created the Microsoft ecosystem, which includes the Windows operating system, its browser Internet Explorer, its search engine Bing, and Microsoft Office. They are all designed so that they work quite well together. They share some of the same commands.

But in many of these dominated networks (a good example is Amazon), many of the constituencies are themselves agents with independent business powers. So, for example, two-thirds of the business conducted on Amazon is conducted by independent merchants. Subject to some limitations, they have the power to set their own prices. Price fixing among them would be reached under the antitrust laws, as would any other agreement regarding exclusion or standard setting or anything like that. So while these things are single networks, for many, many antitrust purposes we treat them as collaborations by a multiplicity of firms.

And finally, some networks are dominated but not dominant in their markets. For example, just two weeks ago, Judge Boasberg concluded that Facebook was not a dominant network, at least for purposes of Section 2 of the Sherman Act, after he looked at substitution data —the data about how customers respond to price or quality changes within the network and how quickly they can switch to other alternatives— and importantly which alternatives they switched to. He concluded that YouTube and TikTok should have been put into that market along with the firms that the FTC acknowledged, and as a result, even excluding YouTube, Facebook’s market share fell below 54%, and Judge Boasberg could not find any U.S. antitrust case that has ever condemned a firm for monopolization on a market share no higher than 54%.

Interestingly, the highest degree of substitution in the Facebook case occurred with YouTube, which is where most people went when they left Facebook for another site. If YouTube had been included in the relevant market, the market share would very likely have dropped well below 50%.

Okay. One of the first questions you confront in dealing with networks is: when do they have market power? Remember, market power is the power to raise price above one’s cost without very quickly losing so many sales that the price increase is unprofitable and has to be rescinded. A firm has market power if it can maintain prices above cost for a significant amount of time. How long? Some people say a year, but the exact duration doesn’t matter so much, as long as we understand that it is substitution away from the firm that denies it market power.

There’s been a great deal of anxiety and hand-wringing in the literature over how you measure market power on large digital networks. First of all, they are two-sided markets, and one effect of that is that things that look suspicious on one side may have purely innocent explanations when you look at the other side. Another factor is that firms usually have different market shares on the two sides of a two-sided market.

One example is Google Search. Google Search until relatively recently had a market share in the low to mid-90% range—very clearly a dominant firm if you look at consumer usage. On the other hand, it’s responsible for about 25 to 30% of advertising. And there’s a pretty good explanation for that: people using digital search don’t have very many good alternatives to digital search. They have old-fashioned paper searches, which are clearly inferior. AI may at some future point provide some other alternatives, but it hasn’t really done much of that yet. So Google Search is very dominant on the user side of the market.

On the other hand, digital advertising —even assuming digital advertising is a relevant market, which is debatable since it competes with offline advertising as well— can occur in a wide variety of programs and websites, and as a result Google shares that market with many other firms.

Now, one of the unusual features of a network like Google Search is that historically you think you measure market power by following the money. Well, in the case of a two-sided market, the user side is, in most cases, free to users. The money comes in from the advertisers. Nevertheless, in its suit against Google Search, the government stuck to a strict market share measure of usage, ignoring anything on the other side. It did very largely the same thing in the Google ads case, where it stuck to a market share of advertising. The Facebook case measured market share strictly in terms of consumer usage.

And then the question is: well, aren’t those all wrong? And I should hasten to add that most other antitrust authorities do the same thing. Isn’t it wrong simply to say Google Search’s market share is 90% simply by looking at search usage? And the answer is: I think it’s correct.

We’ve had a fair amount of stumbling in the U.S. over how you define markets and measure power on two-sided platforms. In the American Express case a few years ago, the Court looked at the vast literature about two-sided markets, concluded that a market share measure needed to include both sides, and then concluded that transactions were the relevant way to measure the market. Well, the problem with transactions is that every transaction includes both a buyer and a seller, and buyers and sellers are complements to one another, not substitutes. And you don’t put complements into the same market. Markets are composed of substitutes.

I think as a prima facie matter, the antitrust agencies who look simply at one side of the market are probably doing it correctly, even if it oversimplifies reality a bit. First of all, a very important thing to remember is that market power attaches to products, not to firms, and so by the same token, market power attaches to products, not to platforms. So you can’t simply talk about whether a platform has power.

I’ll give you an example. Microsoft has around 60 to 70% of the market for operating systems for desktop and laptop computers. It has something like 4% of the market for search. Its Bing search engine runs in the 4 to 5% range. It has barely a tenth of a percent in the market for operating systems for handheld devices, where it plays a very, very distant third to the Apple system and the Android system.

So you can’t simply talk about the market power of the platform. You need to talk about the market power of the particular product that a firm is selling. And that’s what’s going on in a case like Google Search. The product in that case is search. The injury is visited on consumers, and that is why simply looking at usage makes sense: the side you measure has to be the side where the harm is located. Looking at that one side is at least the correct presumptive way to approach market definition in antitrust cases involving large digital platforms.

I say presumptive because this conclusion is a little bit less secure than it is with respect to traditional markets. And as a result, I think a defendant should always have an opportunity —but also the burden of proof— to show that a market share number derived from usage on one side of a platform exaggerates that firm’s power as a result of something that’s happening on the other side.

That certainly can apply to practices. For example, you can’t understand a practice like predatory pricing without looking at cost-price relationships across the platform rather than at just one side. Just to get an idea of how far we’ve come in this area: when the original Microsoft case was being litigated in the late part of the 20th century, the government actually alleged that Internet Explorer, which was Microsoft’s browser at that time, was being subjected to predatory pricing—was being sold at a predatory low price—and its claim was based on the fact that Internet Explorer was being given away for free, completely ignoring the fact that Internet Explorer’s revenue was coming in from the other side of the market.

Today we don’t usually make such mistakes. And of course there is a vast number of two-sided markets in which one side is given away for free, and we know better than to say they are guilty of predatory pricing. You have to compare revenues and costs across the entire platform.

But for purposes of market share, we are interested in the product, not in the platform as a whole. And I think that’s always the best way to start.

Another enormous qualifier—and here the courts have not been doing as well—concerns how much power you can infer from market share when we’re talking about digital markets. In most antitrust jurisdictions, there’s a very long tradition of measuring market power by reference to a firm’s share of a well-defined market. We do it, you do it, pretty much everybody does it.

We also have these alternative direct measures, which are robustly used in areas like merger policy—much less in areas like dominant firms, like Section 2 of the Sherman Act. But as a general rule, market power is measured by so-called indirect methods that infer power from a very large market share.

The fact, however, is that the market share/market power equation actually includes three variables, not just one. Incidentally, a very well-known but by now quite ancient law review article by Landes & Posner in the 1981 Harvard Law Review specifies the formulas and shows that you can actually predict a firm’s markup provided that you know the three variables.

One of the three variables is market share. The second variable is the elasticity of demand that the market as a whole faces—which is simply another way of saying that a market has to be well defined. What does that mean? Well, if the market were perfectly defined, there would be no substitution whatsoever between things inside the market and things outside the market. That is virtually never the case. So what we look for are markets where the degree of substitution across the boundary is relatively small. And as the market is better and better defined, market share becomes a better metric for assessing market power. With a poorly defined market, you can prove practically anything you want.
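
For reference (the talk does not write it out), the Landes & Posner article expresses firm i's markup, its Lerner index, in terms of exactly these variables, with the third one, the supply elasticity of rivals, discussed a few paragraphs below:

\[
L_i \;=\; \frac{P - MC_i}{P} \;=\; \frac{S_i}{\varepsilon_d + \varepsilon_s\,(1 - S_i)}
\]

Here \(S_i\) is the firm's market share, \(\varepsilon_d\) is the elasticity of demand facing the market as a whole, and \(\varepsilon_s\) is the elasticity of supply of rival and fringe firms. A large share pushes the predicted markup up; elastic market demand or elastic rival supply pushes it back down.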

I think that was one of the errors made in the Facebook case. The government, the FTC, got all wrapped up in accusing Facebook of dominating a market for social networking aimed at friends and family—a kind of socially oriented network. What it should have done is look much more closely at actual substitution rates.

The lay intuition about markets is that you compare things that look alike and put them in the same market on the basis of physical properties. That’s not a very economically serious way to approach market definition. The better way to do it is really behavioral: you look at the responses by consumers. And that’s what Facebook was able to convince the court about in the Facebook case.

What it did is it took evidence from various price or quality shocks that Facebook had experienced over the past several years. What that means is situations where—now, Facebook is free, so the prices are really quality-adjusted prices—the quality of some feature went down, or a feature was removed, or, on the other side, a competitor’s feature improved. So you look at these differentials in quality, price, and features. It also looked at a few periods where Facebook went offline for technical reasons. And it always asked the same question when these things occurred: how did Facebook’s customers behave?

And the first thing you find is that many, many of them looked for alternatives, but they did not all look for alternatives in the same place. The highest number by far switched from Facebook to YouTube. That was a little bit surprising to people because normally we don’t think of YouTube as a social network. We think of it as a place you go to watch videos. I’ll get back to that in a few seconds.
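
A minimal sketch of that kind of substitution accounting, with entirely made-up numbers rather than anything from the court record, would count where departing users' time goes during a shock and express each destination's gain as a share of total diversion:

# Hypothetical minutes of attention diverted away from Facebook
# during an outage or quality shock (illustrative numbers only).
switch_minutes = {
    "YouTube": 42_000,
    "TikTok": 30_000,
    "Instagram": 15_000,
    "Snapchat": 8_000,
    "MeWe": 1_000,
}

total = sum(switch_minutes.values())

# Diversion ratio: the share of departing usage each rival captures.
diversion = {app: minutes / total for app, minutes in switch_minutes.items()}

for app, ratio in sorted(diversion.items(), key=lambda kv: -kv[1]):
    print(f"{app:<10} {ratio:.0%}")  # the largest ratios mark the closest substitutes

On numbers like these, the two largest diversion ratios would belong to the very firms the FTC left out of its alleged market.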

The second place they went to was TikTok, a new entrant that has not even been available in the U.S. market for more than five or six years. And customers actually went to those two alternatives in greater numbers than to the firms like Instagram, Snapchat, and MeWe that the FTC alleged were in the relevant market.

So looking at that substitution data, the court said: well, clearly YouTube and TikTok need to be included in the relevant market. Now, one particular thing that led to that conclusion by Judge Boasberg is that he concluded that while Facebook may have originated as a pure friends- or family-oriented social network site, it had evolved very, very significantly in recent years. And it was principally today a place where people went to look at shorter videos.

That is, if you measure once again not by looking at features but by looking at time spent—how many hours did Facebook users spend in various activities—the plurality of their usage came in watching videos. And of course that can explain why alternative firms like YouTube and TikTok were properly included in the market.

But once he did those things then Facebook’s market share fell to 64% or less, and Judge Boasberg could not find any U.S. decision that has actually condemned a firm for a Section 2 violation on a market share below the mid-70s. There’s a little bit of dicta in a couple cases that suggests going further, but no courts that actually condemn it. And as a result he held that Facebook did not have a sufficient market share based on that particular market definition.

Now, the third variable in the market power equation is the elasticity of supply of competitors or potential competitors. And that’s simply a reference to the ease and speed with which firms can respond to a competitor’s price increase or decrease or quality change.

To take an extreme example: suppose a firm had a 40% market share, which is not in dominant-firm territory, but every single one of its rivals was completely capacity constrained—which means they had no power whatsoever to increase their own output. In that case even a firm with a 30 or 40% market share would have some power over price. It would be able to reduce its own output, and market output would go down as well, because the competitors would not be able to make up for the defendant’s reduced output with increased output of their own.
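
To put assumed numbers on that point (illustrative only, not figures from the talk), plug a 40% share and a market demand elasticity of 1 into the Landes & Posner expression above:

\[
\varepsilon_s = 0 \text{ (capacity-constrained rivals):}\qquad L = \frac{0.4}{1 + 0\cdot(1-0.4)} = 0.40
\]
\[
\varepsilon_s = 10 \text{ (highly elastic rival supply):}\qquad L = \frac{0.4}{1 + 10\cdot(1-0.4)} = \frac{0.4}{7} \approx 0.06
\]

The same 40% share supports roughly a 40% markup in the first case and only about a 6% markup in the second. Same share, very different power, depending entirely on how quickly rivals can expand.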

On the other hand, if every firm in the market is able instantly, almost costlessly, and without significant limitation to add users or add usage time to its own offerings, then even a firm with a relatively high market share lacks power. One feature of digital markets, of course, is that entry can be hard—and it is for many of these large digital platforms—but the cost of adding an additional user is typically extremely low, approaching zero. There might be a few telecommunications costs or recordkeeping costs, but signing up an additional user to a platform that already exists imposes very, very low costs.

Furthermore, for users to add hours to their current usage in response to someone else’s change of policy is also extremely easy. It’s not like older-style tangible-goods markets where, you know, if Ford raises the price of cars, can Toyota and General Motors fill in with their own output to keep prices level? Well, they’re going to have to build more cars. They may have to build more plants before they can build those cars.

Digital markets typically do not confront anything like the same set of capacity constraints. To the extent that rivals can very quickly and at very low cost pick up any output reduction by the largest firm, that firm’s market share seriously exaggerates its market power.

So my own belief is that Facebook’s effective market power was really much less than its 54% market share indicated, because all of its rivals—YouTube, TikTok, Snapchat, all of them—had the capacity to increase usage very, very quickly and at very low cost.

Very important here is that these social networks are all subject to multi-homing, which means people can be associated with more than one of them. Many of us—particularly younger people—are. So you can have YouTube, TikTok, Facebook, and everything else on your cell phone and switch instantly from one to the other. If you’re not a member of TikTok, it takes you a few seconds to join. If you are a member, you can very quickly switch your allegiance from Facebook to TikTok in response to a price or quality change. Under that set of circumstances, even a high market share does not indicate a substantial amount of power.

Okay. Another issue that comes up with respect to digital platforms is the distinction between unilateral and collaborative conduct. Our own agencies have been obsessed with treating these various antitrust actions against large digital platforms as if they involve unilateral conduct. I think that cost the FTC its victory in the Facebook case. I’ll get to that issue in just a second.

But the fact is that even dominated networks typically act through agents or constituents with whom they have either contractual relationships or licensing arrangements. And for both of those there is conspiratorial capacity. That is, we think of the arrangement between the two parties to a contract as an agreement under Section 1 of the Sherman Act. And we also think of the relationship between two parties to an intellectual property license as an agreement. And so all of these cases could have been brought as collaboration cases.

In the Google Search case, the principal claim was that Google was paying enormous amounts of money—somewhere in the mid-20s of billions of dollars annually—to Apple to make Google Search its default search engine. With the smaller companies it was always the same thing: a payment of a large amount of money to get Apple or some other implementer to behave a certain way vis-à-vis Google Search. It was a classical restraint of trade: a market division agreement for payment, or payment to stay out of a market, or payment to limit one’s output in a market. Those are all things that classically fall under Section 1 of the Sherman Act. And when that happens, they are condemned under much more aggressive standards. We can go after things like joint ventures on a 30 to 40% market share, whereas for unilateral conduct we insist on a much, much higher number.

The same thing was true of the Google ad tech cases. All of them involved arrangements for the buying and selling of advertisements, and as a result they all involved multilateral conduct.

The Facebook case, by the time it went through the litigation process, was really reduced to two issues. One of which was the acquisition of Instagram, one of the fellow networks that the FTC conceded was in the market with Facebook. Of course, Facebook owns it. And then the other one was WhatsApp, a very popular commonly used chatting application that Facebook bought about 10 years ago.

Those two acquisitions were completely challengeable under either Section 1 of the Sherman Act or, today, more likely Section 7 of the Clayton Act. And had they been challenged that way, they could have been prosecuted on much, much lower market shares. That is to say: once the FTC decided to bring the Facebook case exclusively under Section 2 of the Sherman Act, it was held to the structural standards that Section 2 requires. And that’s why Judge Boasberg dismissed the complaint after concluding that the defendant’s market share was only in the low 50-percent range.

We routinely condemn mergers on much, much lower market shares. Our own merger guidelines in the U.S. will condemn a merger where the market share of the post-merger firm exceeds 30%, or, in more concentrated markets, where the HHI—the Herfindahl-Hirschman Index—exceeds 1,800, which can very frequently lead to merger illegality for firms in the 10, 15, 20% range, depending on the structure of the remaining market.
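
As a quick illustration with assumed shares (not figures from the talk), the HHI is the sum of squared market shares, and a merger raises it by twice the product of the merging firms' shares:

\[
\text{HHI} = 30^2 + 25^2 + 20^2 + 15^2 + 10^2 = 2250
\]
\[
\Delta \text{HHI} = 2 \times 15 \times 10 = 300, \qquad \text{post-merger HHI} = 2550
\]

In this hypothetical five-firm market, a merger of the two smallest firms leaves the HHI well above 1,800 even though the merged firm holds only a 25% share.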

I don’t think it’s a sure thing that the FTC would have won the case had it proceeded under Section 7 of the Clayton Act, but I think it would have had a far, far better chance given what Judge Boasberg eventually concluded about Facebook’s market share.

The ironic—and I think quite indefensible—thing in the Facebook case is that that case is governed by our Federal Rules of Civil Procedure. Those rules strongly, strongly encourage plaintiffs to combine their various causes of action based on the same operative facts into a single complaint. That is, we don’t want people to keep writing seriatim complaints; we want them to put all of their counts together.

And so the sensible way to bring a complaint in FTC v. Facebook would have been for the FTC to issue a two-count complaint: one count under Section 2 of the Sherman Act, which is the count that it used, but then a second count under Section 7 of the Clayton Act, in which the court would then be invited to adopt Clayton Act merger standards rather than dominant-firm standards. And I think there’s every possibility that the court would have come out the other way on the Section 7 issue even though it ruled against the FTC on the Section 2 issue.

With respect to other unilateral conduct, one thing about digital platforms is that they tend to blur the line between unilateral and collaborative conduct. And one of the principal ways they do that is through the use of computer code. Pretty much anything you can achieve between two parties by means of a contract you can also do for a digital platform by means of properly developed code.

So, for example, the reason that you as an iPhone or Android user have historically been tied to either Apple’s App Store or Google Play for your software purchases is not that you signed an agreement, not that you accepted a tying agreement between the phone platform and the purchases of your software. No—it’s because those exclusivity provisions are written into the code: without judicial intervention, you cannot install apps from a third-party store, and you cannot install a third-party app for accepting and processing payments on those phones.

And you can kind of go right across the board with tying agreements on digital platforms and see that the great majority of them are not executed by contract. They are executed by computer code, which customers lack the skills to change.

That has a couple of important implications. One is that tying arrangements given effect by computer code are more durable and easier to enforce. If you’ve got to enforce a contract requiring customers to behave a certain way, you’ve also got to be able to detect their violations—and when we’re talking about hundreds of millions of customers, that can get to be quite problematic. In various industries firms have had a great deal of difficulty using classical tying arrangements to manage customer behavior.

A good example is the repeated efforts of computer printer manufacturers to tie their printers to their own ink cartridges, which, if done by contract, means they’ve got to be able to catch people who use non-standard or generic ink cartridges. So what many, many printer manufacturers did instead was switch to digital codes—a code reader on both the printer receptacle and the ink cartridge that have to recognize each other before that particular cartridge will function in that particular printer.
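
Purely to illustrate the mechanism (a hypothetical sketch, not any real manufacturer's scheme), a tie of this kind is just a handshake check written into the firmware; there is no contract to police, because the code refuses to run otherwise:

# Hypothetical printer/cartridge handshake implementing a technological tie.
import hmac, hashlib, os

VENDOR_KEY = b"secret-burned-into-vendor-chips"  # assumed shared secret

def cartridge_response(challenge: bytes, key: bytes = VENDOR_KEY) -> bytes:
    # What a genuine cartridge's chip computes when the printer challenges it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def printer_accepts(reply: bytes, challenge: bytes) -> bool:
    # Printer-side check: only a cartridge that knows VENDOR_KEY will pass.
    expected = hmac.new(VENDOR_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(reply, expected)

challenge = os.urandom(16)
print(printer_accepts(cartridge_response(challenge), challenge))  # True: genuine cartridge runs
print(printer_accepts(os.urandom(32), challenge))                 # False: generic cartridge refused

The exclusivity lives entirely in the code, which is exactly what makes it a unilateral, technological tie rather than a contractual one.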

Once they do that, however, it becomes what we call a tech tie or technological tie. It is unilateral conduct. And I think one of the great ironies—one of the unfortunate ironies—about antitrust policy is that we tend to treat technologically created tying arrangements, or tying arrangements created by computer code, much more gently than we treat tying by agreement, for the simple reason that a tie accomplished by computer code is a unilateral act, and as a result it needs to meet the standards for Section 2 of the Sherman Act, which requires a dominant firm.

If a non-dominant firm ties by code, it really cannot be condemned under Section 2. On the other hand, contractual tying—just as the law of most rule-of-reason collaborations and joint ventures—will reach down into the 30–35% market share range in order to find illegality.

This whole question of the difference between unilateral and collaborative conduct has come up in a lot of different ways. Defendants do everything within their power to turn conduct into something that is unilateral, of course, because that minimizes their liability exposure. Plaintiffs always look for ways to make the conduct collaborative because that can give them increased enforcement power, including perhaps a shot at the per se rule, which is frequently what they need in order to win their cases.

Here of course in the U.S., the real stumbling block is the Supreme Court’s Trinko decision, which held that basically a firm —even a monopolist— has no duty to deal with its rivals. The result of Trinko has been the virtual kiss of death to unilateral refusal-to-deal claims. That raises some very interesting policy issues, one of which is: is it the right rule for networks? Because networks are almost always built on an understanding.

A network is different from, say, a firm and its wholly owned subsidiaries. Under U.S. law, a firm and its wholly owned subsidiaries will be treated as a single entity and, as a result, any decisions made within that group will be counted as unilateral. In a network, on the other hand, the network’s relationships with its various agents—which it does not own, and there are many of them—will be either by contract or by license agreement, and those are collaborative. And so that does raise this important question: should we think about narrowing or softening Trinko so as to impose greater duties to deal on firms and participants in dominated network markets?

One characteristic of these markets is that they are frequently dominated by one firm that does business in certain areas, but they also have third-party agents who do business in those areas, in competition, operating under license agreements or contracts. Then later on, when the dominant firm changes its mind and pulls the plug, does Trinko preclude liability? I believe this is an issue that needs substantial rethinking, and that the Trinko rule is too harsh when we think about networks.

Now I should add that Trinko itself arose within the telecommunications network. The two real parties in interest in Trinko were Verizon, which was an incumbent local exchange carrier at that time, moving on into nationwide carriage and cellular carriage, and its rival was really a subsidiary of AT&T. And when the AT&T subsidiary began functioning less well, the plaintiff alleged a breach of Verizon’s duty to deal, and the Court held that there was none. It never even addressed the fact that the two telecommunications parties were both participants in the same network.

One way to get around the problem is with a little bit of an expansion of the Aspen Skiing rule. The Aspen Skiing rule survives—Trinko expressly declined to overrule it—but Aspen Skiing involved a situation where you had two firms that had originally been involved in a collaborative program. It was very low tech: it was ski lift tickets in Aspen, Colorado. The two competing ski companies had a program for sharing a common “all Aspen” ticket. And then the larger of the two ski companies reneged on that promise and basically left the plaintiff with only a single mountain and a single ski lift and its market share plummeted.

And the Supreme Court held that if there was an original, totally voluntary collaborative agreement, and the dominant party then breached that agreement, reneging on it without a good business justification, then that particular refusal to deal could be reachable under the antitrust laws. And with a little bit of tinkering, you can tell a variation of that story with respect to many, many network cases in which parties get dumped even after they’ve had collaborative agreements.

Today, the biggest Trinko issue in front of the Supreme Court—although I think these cases are starting to dry up because the Court denies more of them—has to do with whether the Trinko rule applies to remedies. So the issue is: okay, Trinko says there is no generalized duty to deal with rivals. Can an antitrust remedy nevertheless be fashioned that requires a dealing order?

Most recently in the Ninth Circuit, in a case called Epic Games—Epic Games is a large maker of digital games that wanted access to the platforms of both Apple and Google devices, and eventually won in the Google case and won under state law, but not under U.S. law, in the Apple case—one of the issues was whether the court could fashion a remedy that required either Apple or Google to deal with Epic Games’ demands.

What Epic Games really wanted to do is something called side-loading, or enable users of Epic Games to purchase higher game levels or other accessories directly through the gaming website without having to go through Google Play (in the case of Android phones) or the App Store (in the case of Apple phones). And the Ninth Circuit said basically that’s fine. The severe limitation on duties to deal in Trinko applies to a dominant firm’s duty to deal in the market generally. It does not limit the ability of a court to fashion a remedy in a case where an antitrust violation has already been found. In these cases, it was tying violations, not just refusals to deal. And in that case, the Ninth Circuit said Trinko was not a bar to an effective remedy. I’m not sure we’re going to see that one resolved this year, but that is one of the issues that still remains in front of the courts.

Okay. Then on remedies in general in dominated digital markets: a few general observations.

Number one: these are civil antitrust cases. Their goal—the remedial goal, once a violation has been found—is not to punish the firm. It is certainly not to ruin the firm, remembering that ruining a firm hurts stockholders, customers, employees, and hundreds of other firms that may be in dealing arrangements with it. Rather, it is to restore competitive conditions.

The government’s authority to create an antitrust remedy comes from 15 U.S.C. Section 25, which gives the government the authority to use the courts to “prevent and restrain” an antitrust violation. That is absolutely all that the remedial statute says. It doesn’t say anything about breakups. It doesn’t say anything about when an injunction is permissible or when some kind of structural relief is called for.

It does have a provision that authorizes the government to “request that such violations be enjoined or otherwise prohibited,” and that creates a statutory presumption that favors injunctions. That is to say, the default or preferred remedy in the case of a Section 2 violation should be an injunction.

And so, notwithstanding the stampede of people—and in the U.S. it’s a big stampede on the antitrust left—who write books with titles like “Break Them Up” and are always looking for opportunities to break up firms: breakups are not the preferred remedy within the U.S. system. Injunctions are. However, that same language goes on to say that such violations shall be enjoined or otherwise prohibited. It authorizes the courts to go beyond an injunction; it just doesn’t give any specifics about what going beyond entails or when it might entail structural relief such as a divestiture.

Here are some of the problems, and why I don’t as a general matter favor divestitures for digital markets. One of the most common ones is that the requested divestitures almost never break up the monopoly.

I’m going to come back to this a little later when we talk about Google Search, but the requested divestitures almost never break up the monopoly. Even the requested divestiture in the Google Search case asked for a spin-off of Chrome. Well, Chrome is a browser. Search engines live in browsers; they’re installed in browsers. About the only connection Judge Mehta could find is that practically all installations of Chrome, which is a Google product, also included Google Search as the default search engine. But that was true of Apple products as well, thanks to the big payments Google was making to Apple: iPhones and iPads and so on also come with Google Search installed as the default search engine.

More importantly, however, the particular asset that gives Google Search its monopoly—its files database, currently at 400 billion pages and counting, this complete copy of the internet that Google is constantly manufacturing by scanning new and changed websites—cannot be broken up. You can’t break it in half: each half would be largely worthless, and the more pieces you tried to break it into, the more worthless it would be.

Now, you could license it, and I want to get back to that later because I think that’s a pretty important alternative. But one of the problems with breakups of digital firms: how would you break up Facebook? You put all the boys on one side and all the girls on the other one? You think the world would be a better place if we did that? Or you could break it up by hemisphere, which would mean I would not be able to talk to my Chilean friends and vice versa. You could break it up by saying: “Okay, pictures and videos go to Facebook A, and verbal messages and other things go to Facebook B.”

The fact is there is no way to break up Facebook that would not do severe and very likely fatal harm to its operational capabilities. There might be a way to license the ability to make a copy of Facebook. The court never got to that because it never found a violation. But the one thing you cannot do is break up.

When we talk about breakups, we use this mental image of a large farm growing corn: if you want, you can break it into three or four pieces and end up with three or four smaller farms. Each of them would keep on growing corn, and they would be in competition with one another. It just does not work that way in digital markets. It’s a truly rare case where a digital product can be broken up without doing severe damage to the quality or operational capabilities of the product itself.

Another historical problem with physical breakups is the one the United States already confronted in the 1911 Standard Oil case. That was the biggest structural antitrust case up to its time. The Sherman Act was only 20 years old; there wasn’t even a Clayton Act yet. Standard was accused of monopolizing a nationwide market for petroleum products. Standard had been formed by a trust, a common law trust agreement that placed a number of individual companies under its common control.

The thing is: those individual companies were all the product of state corporate law that limited the productive assets of a corporation to its own state. And you can see that when you look at what happened to Standard at the time of the breakup. One of the pieces that was broken off was SoCal, Standard of California. Another was Standard of New Jersey, which later became Exxon. A third was Socony, Standard of New York. There was a Standard of Ohio, a Standard of Indiana, and so on.

Each of these broken-up firms fully retained its regional monopoly status, except over a smaller geographic territory. And so the result was you could not predict higher output and lower prices. All you could predict is that instead of having one nationwide monopoly, we now have 33 monopolies that cover smaller geographic territories. But they are still monopolies, and for the most part they did not compete with one another at all. That is, the Standard of California’s facilities competed in California; Standard of Ohio in Ohio; and so on.

Licensing, on the other hand—compulsory licensing as a remedy—has the capacity to create fully competitive rivals. So, for example, if I give six firms each a non-exclusive license to the Google files index database, I will have created six firms which at that instant would be perfectly capable of competing with each other and with Google in search. You never get that kind of result from a physical breakup in an old-economy market.

The Grinnell case did the same thing. The AT&T settlement did a lot of different things, but one thing it did was break AT&T into seven regional Bell operating companies in seven different parts of the country. Most of them have merged back together by today. And that’s not a recipe for improvement: saying “we’re going to keep the monopoly, we’re just going to divide it up into seven regions,” and then layering on very elaborate, difficult-to-enforce interconnection agreements to make sure that we still get one network out of the deal. That’s not a very good recipe for increasing competition in a network.

Now, vertical breakups are sometimes more justifiable, particularly if vertical control creates opportunities for exclusive dealing. So there could be situations in which a vertical breakup is justified.

There are several alternatives to divestiture. First of all, injunctions, which I think should not be overlooked, particularly if they do the right things.

So, for example, the Microsoft case: the government wanted a breakup. This is such an old story I get tired of telling it. The United States government is breakup-happy. They want breakups for everything. They have always asked for them ever since the Standard Oil case. They wanted a breakup in the Microsoft case. They wanted a breakup in the truly disastrous case against IBM, brought in the 1960s and finally dismissed in the early 1980s. And they have requested some kind of breakup in all of the current round of digital platform Section 2 cases.

In the Microsoft case, the D.C. Circuit declined the request for a breakup. What it did do was basically rewrite a whole lot of Microsoft’s licensing agreements and contracts with various implementers. These were people like internet access providers—AOL, that sort of thing. It rewrote them so as to remove their exclusivity provisions. Originally, those agreements all either favored Internet Explorer, which was Microsoft’s own browser, or else insisted on Internet Explorer, and the court basically opened those up. And, probably as a matter of luck, Google was waiting in the wings with its own browser and moved right on in. Microsoft’s share of the browser market fell quite dramatically. Its share of the operating system market did not fall as much. But the Microsoft case was not an attack on the operating system market position. It was an attack in which the harm was alleged to occur in the browsing market, and that market became substantially more competitive. To this day, Microsoft remains a fairly minor player in the browser market.

Compulsory licensing I’ll get to in a little bit, along with interoperability. Data sharing, particularly non-exclusive data sharing, is something I think courts should be looking at more, because one feature of digital production that is dramatically different from old-economy production is that digital productive assets typically cannot be divided, but they can be shared.

Most old-economy assets are just the opposite. If you want to break up Ford and it has six production facilities, you would spin off three of them. You wouldn’t think of asking Ford to rent space in its plants to Chrysler or Toyota or someone else. But in digital markets those kinds of solutions tend to work much better. That is, sharing these facilities is much better than trying to break them off.

As I noted a few minutes ago, sharing can in fact produce full head-to-head competition. I think that’s one of the genius aspects of Judge Mehta’s decree in the Google Search case. If it works out—there’s a lot of experimentation, a lot of uncertainty—that decree will create a small number of qualified firms (we don’t know the exact number yet; people think somewhere in the range of a half dozen) who will receive a full but non-exclusive right to the entire Google Search files database. Which would mean that at that point, at that instant, they would be perfect head-to-head competitors.

However, from that point on, each of them would then have to develop its own improvements and updates individually. And so he’s looking to do two things at once. He’s trying to create more real head-to-head competition in search, but also create a situation in which there’s more room for product differentiation in search markets.

In fact, we think one of the principal reasons that we have a dominant firm in search—Google—is that search traditionally has not been very easy to differentiate. DuckDuckGo tried it eight or ten years ago now with cloaked searching, secret searching. Everybody else immediately copied it. And so today it’s not like social network sites or buy/sell sites on the digital market in which each of them has its own features and they’re quite distinctive from one another. That’s simply not the case with search.

Most people regard search as search—kind of like buying eggs. Some are just better at it than others. And uniformly the consensus position is that Google Search, because of the size of its files index, is better than its rivals are.

The big explosive factor in the Google Search case was the dramatic rise of Gen AI—artificial intelligence programs, large language models, and other technological innovations that at present have kind of an uncertain future in terms of their collaboration with search. Clearly AI aids search, and all the major search engines have incorporated AI features. Clearly some AI engines are developing search engines of their own, like Perplexity. But right now that market is in such an early stage, it’s very hard to say what it’s going to look like or who’s going to come out on top.

And the most important thing that Judge Mehta warned about was that, at the time of the trial—the trial was a year ago, August—there were no witnesses who were specialists in AI. It was barely mentioned as a factor, and the court as well as the parties very largely disregarded it. And then in a year’s time: Judge Mehta found liability in that first case from 2023—sorry, 2024—then the remedy was litigated for another year, including new testimony, new expert reports, and so on. And it was only during that second phase that suddenly AI comes up as this major wild card in search engine technology, with a lot of uncertainty.

There’s a lot of people out there that are absolutely sure that they know what’s going to happen. I’m certainly not one of them, and I suspect most of them are wrong. Judge Mehta was looking for evidence in the record, not just what somebody was saying in an op-ed piece or something, but he couldn’t find very much about the future with respect to AI.

So the wisdom of the remedy was that it created these licensees as head-to-head controllers of the full Google Search database as of the instant that license took effect. But from that point on they are expected to develop their own features, their own search features, their own AI features, and do their own keeping up with the very rapidly changing database on their own.

And so the hope is that—you wouldn’t need six of them to succeed—but if you had three or four who succeeded, and each became rival and somewhat differentiated search engines, we would have a much more competitive environment than the one we have today.

The other thing that Judge Mehta was concerned about was that if you over-share, then you end up creating nothing more than a regulated utility with competing agents at the back end. That is, what you don’t want to have is a situation where Google provides all of the infrastructure—the entire database and everything else—and the agents who become licensees do nothing more than market those results to their various customers.

So it would look kind of like the arrangement between the airlines and the travel agents, right? You don’t increase airline competition by increasing the number of travel agents because they don’t really have much of any control over airline competition. You’re just increasing last-mile competition by increasing the number of users at that end.

So the Mehta decision was intended to force a certain amount of competitive infrastructure to be shared.

I think I’ve covered all of that.

Okay. I want to say a few words about this remedy, which does not have much of a history as a remedy. It does have a great deal of history as a structure—a set of structures—that firms have adopted voluntarily. And that is one way we can think about the problem of dominant digital networks.

The solution for dominant digital networks is not breaking up the networks but rather diversifying and diluting their control. And we’ve got some pretty good examples of that. The telecommunications network is a big one, right? The telecommunications network is arguably the largest network in the world. When it works optimally it means that pretty much everybody can talk to everybody else. However, it is operated very, very collaboratively and in different technologies.

So cellular, for example, has several firms that sell those technologies in competition with one another. The thinking here is this: for, say, Amazon—maybe even Apple or Google—we want to preserve the network advantages by leaving the network itself intact (breaking the network into a bunch of pieces that could no longer talk to each other is exactly what would have ruined the AT&T settlement). But if we can keep the network intact while decentralizing its control, and perhaps even restoring some of that control to competitive entities, then we can get the simultaneous advantages of a single network and competition.

We’ve achieved that to a certain extent in things like cellular communications, where today mergers between cellular carriers are fully covered by the antitrust laws. Price fixing is not as common, but when it does occur (we’ve had some cases in text messaging), it is reachable as well. So we’ve got a situation where we preserve the network, but we treat individual operators within that network as entities who are in competition with one another and are fully reachable under Section 1 of the Sherman Act.

There’s a little list of cases—you may not be familiar with them—but both Terminal Railroad and Chicago Board of Trade are old cases. The Terminal Railroad case involved an agreement among several owners of railroad processing, loading, and cargo transfer entities, as well as two bridges across the Mississippi River. The whole consortium was controlled by this guy Jay Gould who was trying to monopolize the railroad industry. It was in fact a Missouri corporation, and the shareholders were the bridge companies, the warehouse company, and other freight handling companies, and they basically cartelized that market and created a bottleneck so that they could control all of the freight passing from east of the Mississippi to west of the Mississippi or vice versa.

The Chicago Board of Trade case involved an Illinois corporation whose shareholders were individual traders who bought and sold commodities futures. And the court had no difficulty treating that as an agreement among several people even though the defendants were all shareholders in a common firm.

Fashion Originators’ Guild; Associated Press. Associated Press was a concerted refusal-to-deal case. This was in the early days of wire services, in which newspapers would share stories electronically, transmitted by telegraph. They created a corporation incorporated under New York law, and under their rules they shared stories at very low cost with one another. That effectively enabled a newspaper in one city to report on events in other cities even though it didn’t have a physical reporter there. But they also had very discriminatory membership rules, which made it easy to keep out newspapers that competed with existing members, and the Supreme Court struck those down under Section 1 of the Sherman Act.

The NCAA cases do a version of the same thing. The NCAA is a single organization. Its members are colleges, via their sports programs—football, basketball, baseball, and others. And its rules are fully reachable as Sherman Act Section 1 collaborations. As a result, the NCAA has lost several antitrust cases, involving both agreements about television broadcast contracts and, more recently, compensation for student athletes.

So I think what we need to start paying some attention to is this: rather than breaking up networks, we need to think more about breaking up control of networks, particularly in those situations where the parties in control have competing business interests.

There are some obvious choices. Amazon is one. More than 60% of Amazon’s sales are made through third-party merchants. Many of them are quite small; some of them are larger. But if we granted more authority to those merchants—administered under Section 1 of the Sherman Act—to make rules about things like most-favored-nation agreements or other kinds of possibly anti-competitive clauses…

Now, one thing people don’t like about these kinds of solutions is that they are not calculated to make networks smaller. Indeed, to the extent a competitively operated network brings prices closer to cost, they may in fact make those networks larger. But the important thing is that they would be larger within a much more competitive environment.

Okay. I think I’ve covered most of the Google Search remedy case. The judge issued the final remedial order about a week ago, and it included AI in the order. The remedy also appointed a committee to oversee developments. And now the next shoe to drop: the decision will almost certainly be appealed by Google, and very likely by the Justice Department as well. I haven’t heard about the Justice Department yet, but Google is likely to appeal it.

But if it survives—and I give it a very good chance of survival—I think it’s a very well-reasoned opinion. It’s quite creative. It comes up with a solution that does not break anything up, which would have been extremely damaging, but nevertheless has at least a promise of providing real competition the way I think competition can be provided with digital technologies, and that is through non-exclusive licensing arrangements.

So at any rate: once this group of technically qualified firms is identified—firms that have the knowledge and capability to provide search infrastructure, certainly including some that already have search engines, like Microsoft or DuckDuckGo—each of them will get a one-time snapshot. The database grows every day, but on some fixed day each of them will get a copy. “Copy” is a funny word here. I don’t know how you copy 400 billion pages, but that’s their problem, not mine. Each of them will get a copy of the search files database.

And at that point Google’s obligation with respect to them is pretty much over, and they will be expected to go on and build on that database in two ways. One is by updating it continuously, which has to be done. And secondly by innovating in different directions reflecting their own commitments to product differentiation, use of AI tools, or whatever.

And so if that succeeds—and it’s not too ruined by subsequent judicial proceedings—one would hope to see a more competitive search market environment.

Although I have to tell you that, from the consumer side, it’s a whole lot less clear to me that search is not working. The vast majority of customers seem to be—if you look at statistical data—pretty happy with search. There’s a little complaint about bias, but not very much. The real complainers about search are competing implementers, who always find competing very difficult.

Okay. And I was asked to make a couple of comments about regulatory alternatives.

Number one: I’m not in favor of statutes like the DMA that target purely digital markets for adverse treatment. And the main reason is that I think we’re targeting the best-performing part of the microeconomy for hostile intervention.

At least within the U.S., the growth rate of digital markets has been a multiple of the old economy’s, currently about three and a half times faster. More new firms are being formed there. And so every year an increasing percentage of total commerce comes out of digital commerce rather than old-economy markets.

That’s not a good place to look for heroic antitrust intervention. Historically, when we target markets for antitrust intervention, we’re generally looking for stagnant markets: very little has been going on; hasn’t been any new entry activity for years; stable market shares; and as a result, frequently widespread collusion. So you think of cement—it always gets a bad rap for this, and probably deservedly so. But whatever set of criteria you come up with for identifying what the targets for aggressive antitrust enforcement should be, it’s very, very rare that the digital economy would meet that set of standards.

Now one important thing is that once you look beyond antitrust there might be a host of other problems. And this is something that particularly the neo-Brandeisians in America are always pointing out: what about sexual abuses, what about taking advantage of children, what about violations of privacy? There’s all kinds of problems. Which of those depend on large size or large market concentration? I don’t know. The point is: they are not antitrust problems. They’re problems that need solutions.

And frequently, at least intuitively, the problems are more likely to arise in markets with a large number of small firms than in a market with a small number of large ones. But if you stick to the goals of antitrust—which are high output, low prices, and unrestrained innovation, kind of the three-legged stool of antitrust—e-commerce does not give you a very good set of excuses for saying, “Hey, we ought to be passing a bunch of statutes that target digital markets for adverse treatment.”

We haven’t passed one yet in the U.S. We have one under consideration, AICOA—the American Innovation and Choice Online Act—which I don’t think is going to pass anymore. It’s been losing support. It had bipartisan support when it was first promoted in the early years of the Biden administration, but I don’t think it’s going to pass anymore.

So we tend today to use ordinary Sherman and Clayton Act rules for evaluating digital markets. They do require some rethinking of fundamentals, different approaches to things like market power and so on, but it’s still essentially the same set of tools. And for that reason I’m equally skeptical about the use of large amounts of ex ante regulation as opposed to ex post adjudication.

Now that can be simply a plain old-fashioned common law bias. I mean, I am from a common law jurisdiction, as England is. We tend to apply the common law to things after the fact and minimize the amount of anticipatory regulation. I think that approach works well for the digital economies because first of all there are so many things happening. It’s difficult to anticipate those kinds of things with ex ante regulation.

To me, when things move so fast that antitrust is not working, that really suggests you need to rethink whether antitrust is the appropriate tool for that particular market.

I do agree with the merits of the Google Search decision: that it was anti-competitive for Google to pay all this money for exclusive default placement. But you know, that is just plain old-fashioned cartel law. That’s all it was: Google paying Apple to make its search engine the default search engine on Apple devices. That could have been decided under the law of the 1920s. So it was no major shift in U.S. law to come up with that. There are certainly variations, like how you treat the fact that these were only defaults rather than absolute requirements, but it really doesn’t require a dramatic change.

And as a result I tend to be somewhat skeptical of overregulating, because so much of regulation has amounted to suppression of innovation, knocking down of important new ideas simply based on some fear—not yet realized—about what might happen in the future.

So at least for my purposes, I’m going to toe the line.

Anyway, I think I’ve covered what I want to say. Sorry for its miscellaneous nature. I hope it’s been worthwhile for you, and thank you once again for inviting me. I appreciate it.

Pilar Paredes D. (translator)