AI hallucinations and related risks
- Algopolio
- Dec 18, 2025
- 3 min read
Source: Il Sole 24 Ore – “‘AI Hallucination’: a Lawyer Faces the Risk of Disciplinary Sanctions”, 3 December 2025
When AI enters courtrooms: the emergence of a new professional risk
The Il Sole 24 Ore article describes a case that marks a turning point in the relationship between legal professions and artificial intelligence tools. A lawyer, in drafting a defence brief, relied on AI-generated content without verifying its accuracy, introducing inaccurate or irrelevant information into the proceedings. The Regional Administrative Court (TAR) of Lombardy identified a potential breach of the duty of loyalty and referred the matter to the Bar Association for possible disciplinary action. The message is clear: AI cannot replace human oversight in high-responsibility decision-making contexts.
Algorithmic hallucinations as a threat to the quality of law
The phenomenon of hallucinations, plausible but false responses generated by language models, is not an isolated technical glitch but an epistemic risk. In legal professions, which rely on precision, argumentative rigour and the correctness of sources, the introduction of erroneous information can compromise an entire proceeding. The article stresses that, in such circumstances, AI does not merely fail as a tool: it becomes a factor that distorts judgment, introducing errors that only human oversight can prevent.
The boundary between digital assistance and improper delegation thus becomes crucial, redefining professional duties.
The central role of the lawyer: vigilance, not delegation
The Court reiterates a fundamental principle: the lawyer remains the guarantor of the correctness of the acts they sign. Reliance on technology does not exempt professionals from responsibility; rather, it makes that responsibility more complex. The professional must:
- verify every element provided by AI,
- maintain critical judgment,
- ensure that the document complies with ethical and legal standards.
The use of AI is not prohibited, but it requires a new model of diligence, in which the ability to distinguish between a useful suggestion and an algorithmic error becomes part of professional competence.
A cultural issue as much as a technical one
The problem highlighted by the article concerns not only the reliability of AI, but the risk of a broader cultural shift: the tendency to treat technology as an authoritative source of truth. In the legal domain, this shift is particularly dangerous:
- it alters the relationship between interpretation and the exercise of judgment;
- it reduces personal responsibility;
- it risks introducing mechanical shortcuts into legal reasoning that erode the complexity of law.
The quality of legal argumentation cannot be compressed into a generative output; it depends on the human exercise of critical thinking.
The evolution of professional rules in the age of AI
The case inevitably prompts a broader reflection: what disciplinary rules are needed in an era in which AI is embedded in the daily activities of law firms? Professional bodies will need to define new standards, including:
- transparency in the use of generative tools,
- an obligation to verify information,
- clear responsibility for errors introduced by AI,
- limits on the use of unverifiable content.
Law, as a discipline, must adapt to technology without being distorted by it.
What this means for Algopolio: protecting citizens from AI-influenced decisions
This case demonstrates how profoundly AI can affect fundamental rights and guarantees. An algorithmic error in a judicial act can alter decisions that impact a person’s life, reputation and freedom.
Algopolio operates precisely where technology collides with rights and responsibility by:
- supporting citizens who have suffered harm from the improper use of AI;
- helping professionals and users understand risks, limits and regulatory implications;
- promoting transparency and accountability in the use of digital tools;
- challenging any opacity that may endanger the fairness of decisions.
Anyone who believes they have been harmed by AI-generated errors—whether in a legal, administrative or professional context—can turn to Algopolio for protection, guidance and technical assessment. Because AI may assist, but it must never compromise justice.