
On How to Incorporate AI into the Workflow of Antitrust Agencies

28.06.2023
CeCo Chile
8 minutes
Alejandra Palacios P. Senior Fellow at the University of Southern California's school of public policy (USC Price School). From September 2013 to September 2021 she was Chair of Mexico's competition authority (Comisión Federal de Competencia Económica, COFECE).

Discussions on the “wheres” and “hows” of developing computational skills within antitrust agencies are taking shape, and the rise of Artificial Intelligence (AI) has added a new dynamic to these talks.

AI is a tool that could be used to automate antitrust procedures and improve antitrust analysis. It can be used to detect, analyze, and remedy antitrust breaches, as well as to simplify certain procedures. While I am not an AI specialist, my aim here is to bring ideas to the table on how a Latin American competition agency could integrate this technology into its work. Doing so is a two-pronged task: (i) deciding when and how to use AI, and (ii) developing a concrete strategy for its implementation.

So, how have some competition agencies started to use AI[1]?

Reverse-engineering companies’ algorithms

AI mechanisms are being used by market players in the digital space for business-strategy purposes that, under certain circumstances, could amount to anticompetitive behavior. First, retailers could use algorithms to target consumers and charge discriminatory prices for the same product, with an individual's willingness to pay becoming the deciding factor. Another algorithm-enabled practice is self-preferencing, whereby online platforms favor their own products over those of third parties (such as developers or other merchants). Finally, AI can facilitate price fixing (collusion) when, for example, the same pricing algorithm is used by different competitors and is programmed to collude, set higher prices, or simply maximize profits (see “The Impact of Algorithms in Competition Law”).
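
To make the collusion channel concrete, here is a minimal, purely hypothetical simulation in Python (the rule, firms, and numbers are all invented for illustration): when two competitors deploy the same reactive pricing rule, prices can stay above the competitive level without any explicit agreement.

```python
# Purely illustrative: two sellers run the same reactive pricing rule.
# No one ever "agrees" to fix prices, yet prices never fall toward cost.
COST = 10.0  # hypothetical marginal cost per unit

def shared_rule(own_price: float, rival_price: float) -> float:
    """Match the rival whenever they price at or above us; retaliate
    with a small undercut (never below cost) if they undercut us."""
    if rival_price >= own_price:
        return rival_price                 # sustain the elevated price
    return max(rival_price - 0.10, COST)   # punish any deviation

price_a, price_b = 15.0, 15.0              # both start well above cost
for period in range(5):
    price_a, price_b = (shared_rule(price_a, price_b),
                        shared_rule(price_b, price_a))
    print(f"period {period}: A={price_a:.2f}  B={price_b:.2f}")
# Both prices stay at 15.00: deviating is pointless because the shared
# rule guarantees immediate matching or retaliation.
```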

With this in mind, competition agencies are exploring how the algorithms of certain businesses, such as online search engines and e-commerce platforms, work, in order to assess whether their operation could amount to an infringement. In essence, this involves reverse-engineering companies' algorithms to better understand how their computational decision-making works. The goal is to detect actual anticompetitive behavior and, subsequently, sanction it. Algorithmic price-setting can create both intentional and unintentional market distortions, making it difficult for regulators to distinguish between legitimate market efficiencies and anticompetitive practices (see “ABA Special 2023: Algorithm collusion”). This is why reverse engineering is so important.
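
What might such probing look like in practice? A minimal sketch, in which a hypothetical `quote_price` function stands in for a platform's black-box pricing system: the investigator varies only the user profile and checks whether quotes for the identical product diverge.

```python
import statistics

def quote_price(product_id: str, user_profile: dict) -> float:
    """Hypothetical stand-in for a platform's black-box pricing system.
    In a real probe this would be replaced by instrumented requests
    against the service under investigation."""
    base = 100.0
    # Toy behavior: users signaling higher willingness to pay see higher prices.
    return base * (1.15 if user_profile.get("premium_device") else 1.0)

PROFILES = [
    {"premium_device": True,  "location": "city-center"},
    {"premium_device": False, "location": "city-center"},
    {"premium_device": True,  "location": "suburb"},
    {"premium_device": False, "location": "suburb"},
]

quotes = [quote_price("SKU-123", p) for p in PROFILES]
spread = max(quotes) - min(quotes)
print(f"quotes: {quotes}, spread: {spread:.2f}")
if spread > 0.01 * statistics.mean(quotes):
    print("Same product, different prices by profile: a flag worth "
          "investigating for price discrimination.")
```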

«A concrete example of how Natural Language Processing could have been used is the Mexican collusion investigation into the government bond market»

Market screenings

Machine Learning (ML) could be used to run market screenings, helping competition agencies identify anticompetitive patterns. Some antitrust agencies have already put in place pilot projects based on ML techniques that gather publicly available data from different sources for priority industries, for example, staples such as fuel. This algorithmic screening compares prices between products, observes relevant changes, and monitors whether prices of the same product across different firms rise simultaneously over time. This makes it possible to identify patterns or suspicious behaviors that can open lines of inquiry into possible antitrust breaches (see “OCDE: Data screening tools”).
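
As a flavor of the screening logic described above (not any agency's actual tool; the price series are invented), two simple flags are how often all firms raise prices in the same week, and how correlated their price changes are:

```python
import itertools
from statistics import correlation  # requires Python 3.10+

# Toy weekly fuel prices per firm; a real screen would ingest public
# or requested data for a priority industry.
prices = {
    "firm_a": [20.0, 20.1, 20.9, 21.5, 21.6, 22.4],
    "firm_b": [19.8, 19.9, 20.8, 21.4, 21.5, 22.3],
    "firm_c": [20.2, 20.3, 21.0, 21.6, 21.7, 22.5],
}

def weekly_changes(series):
    return [later - earlier for earlier, later in zip(series, series[1:])]

changes = {firm: weekly_changes(s) for firm, s in prices.items()}
weeks = len(next(iter(changes.values())))

# Flag 1: in how many weeks did ALL firms raise prices together?
joint_rises = sum(all(changes[f][w] > 0 for f in changes) for w in range(weeks))
print(f"joint price rises: {joint_rises}/{weeks} weeks")

# Flag 2: pairwise correlation of price changes. High values are a
# screen for further inquiry, not proof of collusion.
for f1, f2 in itertools.combinations(changes, 2):
    print(f"{f1} vs {f2}: r = {correlation(changes[f1], changes[f2]):.2f}")
```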

Given AI's ability to process large volumes of information and identify market anomalies, this technology can reach key data that currently escapes human observation. Antitrust agencies have therefore paid special attention to the possibility of using ML as a screening tool in public procurement, as sketched below.
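
One stylized procurement screen (with toy data and a threshold invented for illustration) flags tenders where every losing bid sits implausibly close above the winner, a classic cover-bidding signature. Real screens combine many such indicators.

```python
# Illustrative bid-rigging screen on toy procurement data.
tenders = {
    "T-001": [100.0, 101.0, 101.5],   # losers cluster just above the winner
    "T-002": [95.0, 112.0, 130.0],    # looks competitive
}

def cover_bid_flag(bids, threshold=0.05):
    """Flag a tender when every losing bid is within `threshold`
    (5% by default) of the lowest bid."""
    low = min(bids)
    losers = [b for b in bids if b != low]
    return all((b - low) / low < threshold for b in losers)

for tender, bids in tenders.items():
    print(tender, "FLAG" if cover_bid_flag(bids) else "ok")
```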

Natural Language Processing

Natural Language Processing (NLP) techniques could help uncover evidence of illegal intent when analyzing the documents of a company under investigation, speeding up the process by which agencies scour documents in a given case.

A concrete example of how NLP could have been used is the Mexican collusion investigation into the government bond market. This investigation began in 2016 and resulted in a $1.7 million (USD) fine on seven banks and 11 traders for manipulating the selling, buying, and commercialization of 142 government bond transactions.

Traders normally use Bloomberg or Reuters chats to post quotes, negotiate trades, and share market information. In some cases, they have also used them as a medium to coordinate price fixing. In the Mexican case, these chat communications were a key piece of evidence in demonstrating collusion. However, identifying those chats was no easy task: the transcripts ran into millions of pages, making manual analysis by the dedicated investigative team prohibitively difficult.

The Mexican antitrust agency (COFECE) initially issued information requests to nine banks, asking them to provide the chat conversations generated on these platforms that met two characteristics: (i) a trader from one bank had engaged with at least one trader from another bank, and (ii) the conversation concerned the price or performance of one or more government bonds.

With this first request, the banks identified millions of chat transcripts in their databases. The scale of this information was unmanageable, with too few personnel on hand to read, analyze, and categorize the contents of those chats. So an iterative filtering process between the antitrust agency and the banks began, with new, more precise searches filtered by specific keywords to limit the number of results. This iteration weighed heavily on investigation times, as lawyers were slow to return results after each new search filter. In the end, 40,243 chats were added to the investigation file and analyzed by the competition agency, which then selected those that could evidence probable collusive agreements. Out of more than 40,000 chats, only 72 reflected conduct that was ultimately sanctioned.

If COFECE had had access to NLP techniques, its investigative ability would, without doubt, have been more potent. In this case, the agency relied on the banks' search capabilities, not its own. Moreover, the searches were based only on exact keyword matches or very simply defined patterns, rather than on more complex, context-aware linguistic patterns. NLP might have uncovered more wrongdoing and led to heavier sanctions.
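
To illustrate the gap between the two approaches, here is a minimal sketch using the open-source sentence-transformers library; the model name, keywords, and chat snippets are all invented, and the real transcripts would of course be in Spanish and far messier.

```python
# Illustrative only: exact keyword filters miss paraphrased coordination
# that a semantic (embedding-based) search can still rank highly.
from sentence_transformers import SentenceTransformer, util

chats = [
    "let's both quote the 10-year at 98.20 tomorrow morning",  # collusive
    "holding my level if you hold yours on the M-bono",        # collusive, no keyword
    "what time does the auction close today?",                 # innocuous
    "client asked for a price on the 5-year, quoted mid",      # innocuous
]

# Keyword filter: misses the second chat, which avoids obvious terms.
KEYWORDS = ("fix", "agree", "quote the")
print("keyword hits:", [c for c in chats if any(k in c for k in KEYWORDS)])

# Semantic filter: rank chats by similarity to a plain-language
# description of the suspected conduct.
model = SentenceTransformer("all-MiniLM-L6-v2")
query = "traders agreeing with a competitor to set or hold bond prices"
scores = util.cos_sim(model.encode(query), model.encode(chats))[0]
for score, chat in sorted(zip(scores.tolist(), chats), reverse=True):
    print(f"{score:.2f}  {chat}")
```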

Regarding an implementation strategy for a project of this magnitude, the first concern is to define concretely which processes to start with and in what order of priority. Arguably, the most important success factor is choosing the right process, or processes, to automate with AI. For each of these endeavors, sufficiently large quantities of usable data must be available and suited to the desired goal. Next comes deciding the respective scope of human and machine decision-making.

Likewise, it is important to develop a strategic plan that considers the required capabilities of both humans and computers. Cost is also a determining factor, weighing up human expertise, capacity to deliver, and budget. Strong support from the agency's leadership team is also essential for a successful implementation of the plan.

Significant attention will be required for building human capital within the antitrust agency, as some degree of in-house expertise in data analysis will be needed. Typically, antitrust agencies hire lawyers and economists, but thought now needs to be given to bringing data scientists and technology experts into the workforce as part of the enforcement team. Building and maintaining adequate in-house expertise will not come easily, given the competitive salaries the private sector offers workers with such skills.

In 2014, COFECE established a Market Intelligence Unit (MIU) as part of its strategy to strengthen enforcement capabilities through the use of data. At that time, its main tasks consisted of market screenings and economic analysis to detect potentially anticompetitive conduct (using mainly analytical algorithms programmed in R or Python), as well as forensic analysis of the digital information gathered during dawn raids. Its toolkit has since been developed further to include automated data collection, incorporating a variety of algorithms capable of web scraping, data extraction from different formats, and database integration from both structured and unstructured sources. Another ongoing project is an in-house public procurement screening tool; the agency reports having built leads for several major public procurement cases using it[2].
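
As a flavor of what such automated collection can involve (the URL, page structure, and CSS selectors below are hypothetical, and `requests` plus `BeautifulSoup` stand in for whatever tooling the MIU actually uses), a minimal scraper might pull posted fuel prices into a uniform CSV for downstream screening:

```python
# Hypothetical sketch of automated price collection for screening.
# The URL, HTML structure, and CSS classes are invented for illustration;
# real targets require their own parsing logic.
import csv
import requests
from bs4 import BeautifulSoup

URL = "https://example.org/fuel-prices"   # placeholder source

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

rows = []
for row in soup.select("table.prices tr")[1:]:   # skip the header row
    cells = [td.get_text(strip=True) for td in row.select("td")]
    if len(cells) == 3:                          # station, product, price
        station, product, price = cells
        rows.append({"station": station, "product": product,
                     "price": float(price.replace("$", ""))})

# Persist to CSV so downstream screening scripts (like the joint
# price-rise flags above) can consume a uniform format.
with open("fuel_prices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["station", "product", "price"])
    writer.writeheader()
    writer.writerows(rows)
print(f"collected {len(rows)} price observations")
```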

From 2014 to 2019, continuous and substantial investment was made in expanding IT infrastructure, with a focus on broadening IT forensic capabilities, expanding computing capacity, and building cloud-based infrastructure. This initial impetus was curbed by an austerity policy imposed by the federal government, which deems the purchase of new computer equipment unnecessary and expensive.

I therefore reiterate the importance of support from the leadership of antitrust agencies, including those responsible for authorizing public spending.

[1] On concrete projects taken by antitrust agencies, see The Adoption of Computational Antitrust by Agencies: 2nd Annual Report at https://law.stanford.edu/codex-the-stanford-center-for-legal-informatics/computational-antitrust/.

[2] For more details see the Mexican Antitrust Agency update at The Adoption of Computational Antitrust by Agencies: 2nd Annual Report.
