The next leap: Algorithmic war
- Algopolio
- Dec 4, 2025
- 2 min read
Source: Il Sole 24 Ore – “Algorithmic war: an evolutionary leap from laboratory to action”, 3 December 2025
The entry of AI into the defence domain: the threshold that has been crossed
The article highlights a historic turning point: the U.S. Department of Defense has awarded major contracts to three leading AI companies — Anthropic, Google and xAI — to integrate advanced artificial intelligence systems into military processes. This is no longer experimental technology: it is the official deployment of AI within command structures, battlefield analysis and operational decision-making. This transition marks the collapse of the barrier that traditionally separated research from action, opening ethical and geopolitical scenarios that no democracy can afford to ignore.
From digital assistance to operational agency
Until now, AI has been perceived largely as a decision-support tool, capable of analysing data and offering strategic guidance. The leap described in the article is qualitative: AI becomes an agent, not merely an instrument. It begins to:
- interpret complex operational data
- sequence decisions
- adapt quickly to changing situations
Speed, however, becomes a risk when it outpaces human deliberation. An AI system that reacts without understanding meaning may turn efficiency into a moral and strategic vulnerability.
The “human in the loop” doctrine: a fragile line under pressure
Benanti recalls that U.S. doctrine prohibits granting algorithms full autonomy in the use of lethal force without human supervision. But global competition — especially with China — and the appeal of instantaneous reaction times push towards broader integration, where supervision risks becoming symbolic rather than substantive.
The danger is concrete:
- AI could undermine the human ability to evaluate consequences
- feedback loops might intensify escalation beyond human control
- the separation between “technical action” and “armed intervention” could collapse
The dual face of AI in warfare: precision or disinformation?
The article warns of another risk: AI’s capacity to generate synthetic disinformation — altering strategic perceptions or manipulating public opinion. The same infrastructure touted for “surgical precision” can be used to fabricate scenarios, create false intelligence or deceive enemy systems. This makes AI not only a weapon of impact but also a weapon of narrative, capable of reshaping the cognitive dimension of conflict.
An ecosystem of defence without ethical safeguards?
Integrating AI into defence systems requires a redefinition of responsibility. Who is accountable for an autonomous agent’s decisions? Which legal framework governs algorithmic actions executed faster than human oversight?
Without strong ethical safeguards, we risk delegating global security to systems operating according to criteria that are non-human: optimisation, statistical confidence, error minimisation. But war is not a mathematical equation. It concerns lives, political stability and the equilibrium of entire regions.
What this means for Algopolio: monitoring digital power before it becomes irreversible
Benanti’s reflection aligns with one of Algopolio’s core concerns: the urgent need to govern technological power before technological power governs us. Algopolio works to:
- analyse the political and ethical impact of advanced automation
- expose risks arising from opaque or unaccountable algorithmic systems
- support citizens, researchers and institutions in understanding AI
- promote democratic oversight over technologies capable of altering societal structures
Anyone who feels exposed to the consequences of AI — whether through opaque decision systems, security issues or lack of accountability — can turn to Algopolio for guidance, protection and informed analysis. Technology must serve humanity, not surpass it.