AI Hallucination and disciplinary sanctions
- Algopolio
- Dec 4, 2025
- 3 min read
Source: Il Sole 24 Ore – “AI Hallucination: the lawyer risks disciplinary sanctions,” December 3, 2025
When AI enters the courtroom: the rise of a new professional risk
The article in Il Sole 24 Ore reports a case that marks a turning point in the relationship between the legal professions and artificial intelligence tools.
A lawyer, while preparing a legal brief, relied on AI-generated content without verifying its accuracy, introducing incorrect or irrelevant information into the proceedings. The Administrative Court of Lombardy identified a potential breach of the duty of professional integrity and referred the matter to the Bar Association for possible disciplinary action.
The message is clear: AI cannot replace human oversight in high-responsibility decision-making environments.
Algorithmic hallucinations as a threat to legal reliability
The phenomenon of hallucinations (plausible but false answers generated by large language models) is not an isolated technical flaw but an epistemic risk.
The legal professions rely on precision, rigorous reasoning and correct sourcing; introducing erroneous information can jeopardize the entire proceeding.
The article notes that in such cases AI does not simply fail as a tool: it becomes a source of distortion of judicial reasoning, inserting errors that only human review can detect.
The boundary between digital assistance and improper delegation thus becomes central to redefining professional obligations.
The central role of the lawyer: vigilance, not delegation
The court emphasized a core principle: the lawyer remains fully responsible for the accuracy of the documents they sign.
Relying on technology does not exempt the professional from responsibility; if anything, it makes that responsibility more complex. The professional must:
verify every element generated by AI,
maintain independent critical judgment,
ensure that the document meets ethical and legal standards.
AI usage is not forbidden, but it requires a new model of diligence, where the ability to distinguish between useful suggestions and algorithmic errors becomes part of professional competence.
A cultural issue as much as a technical one
The problem highlighted by the article is not merely the reliability of AI but the risk of a cultural shift: treating technology as an authoritative source of truth.
In the legal field, this shift is particularly dangerous:
it weakens the relationship between interpretation and judgment;
it reduces personal responsibility;
it risks introducing automated reasoning into a domain grounded in nuance and human evaluation.
Legal argumentation cannot be compressed into a generative output: it requires human critical thinking.
Evolving professional rules in the AI era
This case inevitably prompts a broader reflection: which professional rules are needed in an era when AI is integrated into daily legal work?
Bar associations will need to define new standards, including:
transparency in the use of generative tools,
obligation to verify AI-produced content,
clear responsibility for errors introduced by AI,
limits on using unverifiable or synthetic content.
Law as a discipline must adapt to technological change without losing its essence.
What this means for Algopolio: protecting citizens from AI-influenced decisions
This case shows how deeply AI can affect fundamental rights and procedural fairness.
An algorithmic error in a legal document can alter decisions that shape someone’s life, reputation or freedom.
Algopolio intervenes precisely where technology collides with rights and accountability:
supporting citizens harmed by improper or negligent use of AI;
helping professionals and users understand risks, limits and regulatory implications;
promoting transparency and responsibility in digital tools;
opposing every form of opacity that could endanger fairness in decision-making.
Anyone who believes they have been harmed by AI-generated errors, whether in a legal, administrative or professional context, can contact Algopolio for guidance, analysis and concrete protection.
AI may assist us, but it must never compromise justice.

