Algorithmic Price Discrimination as Exploitative Abuse under Article 102 TFEU | CeCo


Abuse of a dominant position, Banred case, Judicial review, Price discrimination

Algorithmic Price Discrimination as Exploitative Abuse under Article 102 TFEU

29.10.2025
Miroslava Marinova. Dr. Miroslava (Mira) Marinova is an experienced professional, having worked for more than 15 years in the area of competition law in both the public and private sectors. She currently teaches International Competition Law, Intellectual Property Law and EU Law at the University of East London (UEL). Before joining UEL, she was part of the Competition Law Enforcement team at Ofgem, the UK’s energy regulator. Mira is a Senior Fellow and leads the UK initiative of the Competition and Innovation Lab at George Washington University. She also serves as a Visiting Lecturer at King’s College London.

This note is published as part of an agreement between CeCo and ASCOLA Latam. Both organizations agreed to cover some of the articles discussed at the 2025 ASCOLA annual conference, held in Chicago. This particular note is a summary prepared by Miroslava Marinova of her presentation at that conference. To see CeCo’s translation of this note into Spanish, click here.


Dr. Miroslava Marinova discusses how algorithmic price discrimination should be addressed under EU competition law.

___________________________________________________________________

The growing use of algorithms, including those powered by Artificial Intelligence (AI), to optimize prices has generated significant debate about their benefits and potential adverse effects on competition and consumers. Two key issues emerge in this debate: the use of algorithms that facilitate personalized pricing based on individual consumer data, a practice that results in personalized price discrimination (referred to as algorithmic price discrimination), and their potential to facilitate collusion. While algorithmic collusion has received considerable attention in both academic and policy discussions, the potential harm arising from algorithmic price discrimination remains comparatively underexplored.

Price discrimination refers to the practice of charging different customers different prices for the same goods or services, despite similar costs of production, in order to maximize overall profits. It typically depends on differences in consumers’ price sensitivity or willingness to pay, allowing firms to extract more value from those willing to pay more while also capturing business from more price-sensitive customers. Historically, personalized price discrimination was rarely implemented in practice due to the difficulty of identifying customers’ preferences and willingness to pay. In recent years, advances in digital technologies have significantly expanded the scope of price discrimination by enabling the collection and analysis of detailed, individual-level data. This has allowed firms to move beyond traditional segmentation based on broad demographic categories and instead construct highly granular consumer profiles. Recent research indicates that companies now increasingly use AI-driven algorithms that can process complex data to personalize and adapt pricing strategies for each consumer to a far greater degree than was ever possible before.

As a result, the same product may be offered to different consumers at different prices under identical market conditions, often without consumers’ knowledge, raising concerns about transparency and justification. The purpose of our study is to examine directly whether algorithmic price discrimination, when implemented by dominant companies, constitutes an abuse of a dominant position under Article 102 of the Treaty on the Functioning of the European Union (TFEU).

Why do companies adopt price discrimination?

Price discrimination schemes are generally viewed as pro-competitive, as they expand sales by lowering prices when a company operates in a competitive market. Both classic economic models and modern analyses support the conclusion that, by charging higher prices to customers willing to pay more than to those who value the product less, a company can extract more profit than if it charged all customers a uniform price. Economists traditionally categorize price discrimination into three types.

First-degree price discrimination, or perfect price discrimination, occurs when a seller charges each consumer the maximum price they are willing to pay for a product. This approach allows the seller to capture all consumer surplus and maximize profits by extracting incremental value from every transaction. However, three conditions must be met for this to succeed. First, the seller must have some degree of market power; otherwise, no consumer could be charged more than the competitive price. Second, the customer must be restricted from reselling the product at a higher price to a customer with a higher valuation. For example, arbitrage can be restricted if a company offers products with warranties that are only valid for the initial buyer, which means that if the product is resold, the new buyer may incur additional costs. Third, the seller must have complete knowledge of each customer’s willingness to pay, so that it can charge the highest price to those willing to pay more and a lower price to those who value the good less. These stringent requirements make first-degree price discrimination difficult to achieve in practice.

Second-degree price discrimination involves more complex pricing schemes that maximize the seller’s profits by charging different prices depending on the quantity sold (known as quantity or volume discounts), resulting in nonlinear pricing. One such nonlinear pricing scheme is a two-part tariff. Real-life examples include student travel cards and telecom subscriptions.

Third-degree price discrimination involves segmenting consumers based on observable characteristics, such as age or location, and charging different prices accordingly. By adjusting prices to match the demand elasticity of each segment, firms can optimize revenue while potentially offering lower prices to more price-sensitive consumers. For instance, students or senior citizens may benefit from discounts based on their presumed financial vulnerability. This form of price discrimination is widely employed because it is both practical and profitable, potentially improving access to goods or services for certain segments.

Traditionally, firms segmented consumers into broad groups based on observable characteristics. In contrast, algorithmic tools utilize large volumes of behavioral and transactional data to infer an individual consumer’s likely willingness to pay; algorithmic price discrimination therefore aligns most closely with first-degree price discrimination. While this typology highlights how firms adapt pricing strategies to consumer characteristics and behaviour, the broader question remains: under what conditions do these practices enhance or harm overall consumer welfare?
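The profit ordering behind this typology can be illustrated with a toy numerical sketch. The consumers, valuations and costs below are hypothetical assumptions for illustration only, not figures from the study: a seller earns the least under a single uniform price, more under third-degree segmentation, and the most under perfect (first-degree) discrimination.

```python
# Toy model: five consumers with different willingness to pay (WTP);
# producing one unit costs 4. All numbers are hypothetical.
wtp = [20, 15, 10, 7, 5]
cost = 4

def profit_uniform(price):
    """Profit at a single uniform price: only consumers whose WTP
    meets the price actually buy."""
    buyers = [v for v in wtp if v >= price]
    return (price - cost) * len(buyers)

# Best uniform price: each WTP level is a candidate price.
best_uniform = max(profit_uniform(p) for p in wtp)

# First-degree (perfect) discrimination: each consumer pays exactly
# their WTP, so the seller captures the entire surplus.
perfect = sum(v - cost for v in wtp)

# Third-degree: two observable segments (e.g. full-price vs. discounted
# customers), each charged its own best uniform price.
def best_segment_price(segment):
    return max((p - cost) * len([v for v in segment if v >= p]) for p in segment)

third_degree = best_segment_price([20, 15]) + best_segment_price([10, 7, 5])

print(best_uniform, third_degree, perfect)  # 22 28 37
```

With these hypothetical valuations, profit rises from 22 (uniform) to 28 (segmented) to 37 (perfect discrimination), matching the intuition that finer segmentation extracts more surplus.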

What are the welfare effects of price discrimination?

The welfare effects of price discrimination have been extensively studied in the economic literature. Price discrimination is primarily employed by firms to maximise profits, but its welfare effect depends on market structure and consumer response. Under certain conditions, particularly when price discrimination expands output by giving price-sensitive consumers access to the product, it may enhance total welfare. In such cases, differential pricing can improve allocative efficiency and reduce prices for some consumers. However, it does not necessarily increase consumer welfare. For this reason, price discrimination that is efficient from a total welfare standpoint may still be problematic from a consumer welfare perspective, and the overall effect of price discrimination on output and welfare is therefore ambiguous and may vary case by case.

One reason for this is that price discrimination would be expected to be self-correcting in a competitive market: customers can switch away from firms they observe using discriminatory pricing practices that they consider unfair. However, this assumes a functioning competitive market. Where such conditions are absent, particularly in cases involving dominant firms, price discrimination may be abusive under competition law.

Crucially, the success and welfare effects of price discrimination depend not only on the firm’s ability to collect and utilise information about consumers’ willingness to pay, but also on its ability to prevent arbitrage. Traditionally, the effectiveness of price discrimination was limited by the threat of arbitrage, as consumers could often resell goods across market segments, undermining firms’ attempts to charge different prices. While much of the academic literature on arbitrage and market segmentation focuses on traditional markets, emerging research addresses how digital technologies enable firms to prevent arbitrage in digital contexts. As a result, resale becomes harder or impossible, making discrimination more sustainable and less detectable. Digital technologies have thus significantly enhanced firms’ ability to implement and sustain price discrimination strategies that were previously difficult to maintain, making their welfare effects more complex and context-dependent.
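The ambiguity described above, where total welfare may rise while consumer welfare falls, can be sketched with a standard linear-demand monopoly model. The demand and cost parameters below are illustrative assumptions, not figures from the study.

```python
# Toy welfare comparison: linear demand P = a - b*Q, constant unit cost c.
# All parameter values are hypothetical.
a, b, c = 10.0, 1.0, 2.0

# Uniform monopoly pricing: marginal revenue (a - 2*b*Q) equals cost c.
q_m = (a - c) / (2 * b)               # quantity sold under a uniform price
p_m = a - b * q_m                     # the uniform monopoly price
producer_uniform = (p_m - c) * q_m    # producer surplus (profit)
consumer_uniform = 0.5 * b * q_m ** 2 # consumer surplus: triangle above price

# Perfect (first-degree) discrimination: output expands to the efficient
# level, but the seller captures the entire surplus.
q_eff = (a - c) / b
producer_perfect = 0.5 * (a - c) * q_eff
consumer_perfect = 0.0

total_uniform = producer_uniform + consumer_uniform
total_perfect = producer_perfect + consumer_perfect
```

With these parameters, output expands from 4 to 8 units and total welfare rises from 24 to 32, yet consumer surplus falls from 8 to 0: efficient in total-welfare terms, but a complete transfer of surplus from consumers to the producer.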

Is AI-driven pricing making personalised price discrimination possible?

While first-degree, or perfect, price discrimination has traditionally been viewed as impossible in practice due to information limitations and difficulties in preventing arbitrage, the development of digital technologies and algorithmic tools over the past decade has significantly changed traditional approaches to personalized pricing. Online platforms now routinely collect and analyse personal data ranging from demographic information (e.g. age, gender, education) to dynamic behavioural patterns, including past online transactions, geographic location and browsing history, all of which are gathered from users directly, through cookies, or acquired from third-party data vendors. Modern algorithms can analyse vast volumes of such data to construct detailed profiles of individual consumers. Through techniques like data mining, online platforms can combine disparate pieces of information into comprehensive consumer profiles, enabling them to predict and influence consumer behaviour and reactions to price changes or special deals online. A core barrier to implementing personalised pricing, particularly the second condition identified above, has historically been the threat of arbitrage: the risk that consumers who face low prices may resell to others. However, a growing body of evidence suggests that in digital markets, resale can be effectively constrained through both technical and structural means. These include privacy-intrusive mechanisms (e.g., non-transferable identity tags, account verification, and digital rights management (DRM)) that serve to prevent resale and sharing. However, while these developments lay the technical groundwork for personalized price discrimination, they overlook a critical and often underexplored factor: consumers’ perceptions of fairness.

Algorithmic price discrimination and consumers’ perception of fairness

Consumers’ views of fairness introduce a novel and crucial perspective into the discussion of personalized pricing. Price discrimination based on specific consumer characteristics is generally considered fair when the parameters used to set different prices are transparent and easily understood by the consumer. Empirical evidence suggests that consumers are more accepting of personalized pricing when it is accompanied by clear and transparent communication. However, when consumers are unaware of the parameters taken into account when setting prices, they may consider those prices unfair, even if the prices charged maximize overall consumer welfare. If consumers consider a price unfair, this directly affects their purchasing decisions, reduces trust in the market, and creates psychological discomfort that diminishes the overall value of transactions. Studies show that consumers react negatively to pricing practices that lack a clear objective justification, particularly when personal data is involved. This aligns with evidence from behavioural economics and cognitive science, which shows that consumers hold negative attitudes toward algorithmic price discrimination mainly because they perceive it as unfair or non-transparent. Recent studies have shown that algorithmic pricing creates feelings of betrayal, reduces perceived fairness, and erodes consumer trust. The Amazon case illustrates this: in 2000, Amazon experimented with price discrimination by charging different prices for the same DVDs based on customers’ purchasing behaviour. When consumers discovered that the company was using their purchasing data to set different prices for online DVD sales, the ensuing public backlash highlighted strong consumer resistance to non-transparent personalized pricing strategies.
Based on this evidence, some authors have suggested that in digital markets, many firms will refrain from employing personalized pricing even where it is technically possible because they are concerned about potential damage to their brand reputation and loss of consumer trust. However, this self-restraint tends to break down when a dominant firm employs algorithmic price discrimination. Under such conditions, consumers lack meaningful alternatives. This is where algorithmic price discrimination transitions from a competitive strategy to a potential source of consumer harm. This brings into focus whether and how existing legal instruments, specifically Article 102 TFEU, can address exploitative or discriminatory pricing practices that emerge from algorithmic personalisation.

Can Article 102 TFEU be applied to AI-driven personalized pricing?

Article 102 TFEU prohibits the abuse of a dominant position and serves as the legal basis for addressing various forms of anti-competitive conduct, including price discrimination. Specifically, price discrimination is covered under Article 102(c) TFEU, which explicitly prohibits applying dissimilar conditions to equivalent transactions with other trading parties, thereby placing them at a competitive disadvantage.  Case law provides examples of discriminatory practices that infringe Article 102 TFEU. The MEO judgment clarified that, first, non-vertically integrated companies usually lack an interest in harming competition downstream, and second, that differential treatment would only be considered abusive if it could distort competition, considering all the relevant circumstances. Notably, MEO’s requirement of an anticompetitive effect as a condition of abuse limits the applicability of Article 102(c) to algorithmic price discrimination scenarios where harm arises not from competitive disadvantage but from perceived unfair exploitation of consumers. As outlined above, the possible harm posed by algorithmic price discrimination is the extraction of surplus from final consumers through non-transparent and personalized pricing strategies, making it more akin to exploitation under Article 102(a) TFEU.

Article 102(a) TFEU prohibits dominant firms from “directly or indirectly imposing unfair purchase or selling prices”, which is usually understood as prices that significantly exceed what would prevail under competitive conditions. The provision does not explicitly refer to excessive prices, but it is generally accepted that it can be used to prevent a dominant company from imposing excessive prices if they are unfair, as well as unfair terms and conditions. The first case in which the CJEU set out a framework for testing excessive pricing was United Brands, in which the Court specified that a price is excessive if it bears no reasonable relation to the economic value of the product. According to the Court’s approach, a price is considered excessive if: (i) the difference between the cost incurred and the price charged for a product or service is found to be excessive; and (ii) the price is unfair in itself or when compared with competing products. Since then, this (theoretical) test has become the leading authority for excessive pricing abuses, known as ‘the United Brands two-fold test’. However, the test has been heavily criticised in the academic literature on the grounds that the Court failed to clarify when a profit margin is excessive and when a price is unfair.

The application of the second part of the test, which requires a determination of whether the price is ‘unfair’, should be the focus here. A recent paper evaluating the development of the legal test in excessive pricing cases showed that the two limbs of the second element of the United Brands test, namely the ‘in itself’ test and the ‘competing products’ test, are separate tests measuring different aspects of unfairness that ultimately address the same question: whether the price is excessive in relation to the economic value of the product or service. Therefore, once the excessiveness of the price has been established, the decisive question is whether the price bears a reasonable relation to that economic value. The analysis showed that a price can be considered excessive only if the price increase is not justified by a cost increase and there are no non-cost-related factors, such as consumer preferences, that bring added value to the product and thereby explain customers’ willingness to pay a premium price. An excessive price that is not justified is therefore also unfair. These cases confirm that unfairness is a core element of the legal test. This is particularly important in the context of personalised pricing, where differentiation is based not on product quality or cost justifications, but on the analysis of extensive consumer data.

Are we ready to apply Article 102(a) of the TFEU to AI-driven, personalized price discrimination?

Algorithmic price discrimination, enabled by the growing use of AI and big data, allows companies to personalize prices for individual consumers based on detailed behavioural data. This practice results in greater price differentiation, especially when consumers revisit websites multiple times. While such practices aim to maximize profits, they often lead to perceptions of unfairness among consumers, which is a crucial factor in determining their competitive and legal implications. Consumers’ perceptions of fairness are central to this analysis. If algorithmic price discrimination is perceived as unfair, it becomes part of consumers’ preferences, influencing their purchasing decisions and potentially leading them to turn to platforms that guarantee uniform pricing. This dynamic shows the self-regulating nature of competition in competitive markets, where consumers can switch to alternative suppliers. However, this self-correcting mechanism fails in markets dominated by firms with significant market power. In such cases, algorithmic price discrimination is not constrained by competitive forces and instead serves as a tool for dominant firms to extract additional profits by reducing consumer surplus.

This dynamic underscores that algorithmic price discrimination is constrained by competition and becomes a competition law issue only when market power is present. The underlying principle is that pricing practices deemed fair are those that could arise under normal competitive conditions. When price discrimination occurs solely because of market power, it departs from what would be considered fair in a competitive environment. The potential impact of such algorithmic pricing is that it can lead to unfair treatment of consumers and lower consumer welfare, as it represents a substantial shift in welfare from consumers to producers. Even when algorithmic pricing does not reduce overall welfare, it raises significant concerns about fairness and potential harm to consumers; while the academic literature reaches mixed conclusions, these concerns remain valid.

While algorithmic price discrimination presents unique challenges for competition law, its impact largely depends on market structure. In competitive markets, consumers can mitigate harm by switching suppliers, though concerns about fairness and data privacy persist. In markets where a dominant firm controls pricing, by contrast, algorithmic price discrimination becomes far more problematic: consumers have no alternatives, leading to a reduction in consumer welfare and potential exploitative abuse under Article 102 TFEU. Until actual cases are advanced, however, uncertainty remains. Further research is needed to develop robust methods for assessing and proving consumer perceptions of unfairness for the purposes of an Article 102(a) analysis.

* In accordance with the ASCOLA Transparency and Disclosure Declaration, the author has nothing to disclose. 

** The author notes that this note is based on her previous paper written with Christian Bergqvist, «Unlocking Manufacturer Utopia: AI’s Role in Perfect Price Discrimination», peer-reviewed and presented at the 2025 ASCOLA conference in Chicago.