Google Under European Scrutiny: AI Training as the New Antitrust Frontier

Source: Corriere della Sera – “Google in Brussels’ Crosshairs Over AI Training”, 10 December 2025

The European investigation and the core of the conflict

The European Commission has opened a new antitrust investigation into Google, focusing on the use of editorial content for training artificial intelligence models. At the heart of the accusation lies the suspicion that the Mountain View giant has exploited materials produced by publishers and creators without adequate compensation, thereby strengthening its dominant position in the emerging generative AI market. This is not a marginal dispute: it is the point where competition law, copyright and control over information infrastructures converge.

From search to AI: control over the entire value chain

According to Brussels, the risk concerns not only the use of content, but the power structure that results from it. Google is not merely a search engine: it controls access to traffic, hosts content, indexes it and now uses it to fuel artificial intelligence systems that deliver synthetic answers directly to users. In this scenario, publishers and content producers risk becoming invisible suppliers of value, while AI absorbs information, attention and revenue, reshaping the entire economic balance of the sector.

Innovation as a shield

Google’s defence follows a now well-established line: restricting the use of content for AI training would slow innovation and penalise Europe’s technological development. The European Union, however, reverses the argument: without clear rules, innovation risks turning into systemic appropriation, where those who control the infrastructure unilaterally decide how and under what conditions value is redistributed. The question is not whether AI should develop, but who pays the price for its development.

A precedent that goes beyond Google

This investigation does not concern a single company alone. It represents a crucial test of Europe’s ability to intervene when artificial intelligence becomes a tool for power concentration. If model training relies on third-party content without compensation or transparency, the risk is the creation of a closed ecosystem in which a few actors accumulate data, knowledge and informational influence. This marks the transition from traditional antitrust to algorithmic antitrust.

Why this issue concerns citizens

When AI synthesises, reorganises and delivers information, it is not merely “assisting” users—it is mediating reality. If this mediation is controlled by a handful of private actors, without effective counterbalances, the risk is a loss of pluralism, informational autonomy and freedom of choice. What is at stake is not only economic, but democratic.

Algopolio’s role

Algopolio was created precisely to monitor these critical nodes of digital power. The use of content to train AI, the concentration of information infrastructures and the asymmetry between platforms and citizens are central to its mission. Those who produce content, those affected by opaque algorithmic decisions, and those who believe their digital rights have been curtailed all need tools for understanding and protection.

Because artificial intelligence is not neutral. And without rules, it risks becoming yet another lever of invisible domination.
