Overview
Summary
On 23 April 2026, I had the honor of delivering an in-person Special Guest Lecture entitled “Artificial Intelligence in Conflict - Reflections from the United Nations Security Council” to undergraduate students, at the invitation of the Department of International and European Law of the Law Faculty of Moldova State University. The lecture was designed to bridge theory and practice, offering students a rare insight into how the events they follow in global news headlines are translated into diplomatic discussions within the United Nations Security Council and the United Nations General Assembly, and how these discussions can gradually evolve into United Nations policies, normative frameworks, and potentially even new international legal instruments. Drawing on my professional experience at the United Nations, the lecture explored both the legal and ethical dimensions of the use of artificial intelligence in armed conflict. It elucidated the diverse and often divergent positions of UN Member States, while also offering expert-level reflections on emerging challenges - ranging from accountability and human control to the increasing role of private actors in shaping military AI capabilities. The session concluded with a structured discussion of key frequently asked questions, enabling students to engage critically with one of the most rapidly evolving and consequential issues in contemporary international law and global security.
Key points
- The use of AI in conflict is no longer emerging; it is an operational reality: One of the most striking points of the lecture is the speed at which artificial intelligence has moved from innovation to operational deployment in conflict settings. The fact that the United Nations Security Council first formally addressed artificial intelligence only in July 2023, followed shortly thereafter by the United Nations General Assembly in 2024, demonstrates that by that point AI had already become sufficiently widespread to require attention at the highest level of international peace and security governance. This timeline reflects a sobering reality: technology is not waiting for law or diplomacy to catch up; it is already shaping the battlefield.
- UN Member States have rapidly elevated AI to a matter of international peace and security: What began as a technological issue has now been firmly reframed as an international peace and security concern. Security Council debates and events between 2023 and 2025, including high-level briefings and open debates, have consistently highlighted that AI can act as a force multiplier in military operations, can lower the threshold for the use of force, and can compress decision-making time, reducing opportunities for diplomacy and de-escalation. At the same time, experts and UN officials have repeatedly warned that the weaponization of AI may pose existential risks, particularly in scenarios involving loss of human control or misuse by state and non-state actors.
- International law applies, and responsibility for its violation remains human: A critical clarification emerging from both legal analysis and UN debates is that international law extends to govern the use of artificial intelligence in armed conflict, but it does so through existing frameworks, not AI-specific rules. Legally, the answer is clearer than it is sometimes presented politically: humans and States remain responsible - always. Under international law, there is no such thing as “AI responsibility.” Humans retain the agency to decide if, when, and how AI is deployed, including determining the data on which these systems rely. As a result, the same technology may produce very different legal and human rights consequences depending on its use, its design, and its user. The key principle emerging, strongly reaffirmed in the 2025 Report of the UN Secretary-General, is that responsibility cannot be delegated to a machine. At the same time, it is important to underline that there is currently no UN treaty that specifically regulates artificial intelligence in armed conflict. Instead, international law regulates permitted and prohibited uses of force, including those involving AI-enabled military technologies, through International Humanitarian Law (IHL) and related International Human Rights Law (IHRL) protections. These frameworks establish clear legal obligations, including: (i) the principle of distinction, requiring differentiation between civilians and combatants and prohibiting direct attacks on civilians and civilian objects; (ii) proportionality, prohibiting civilian harm that is excessive in relation to the anticipated military advantage; (iii) necessity, limiting the use of force to what is required to achieve a legitimate military objective; and (iv) human rights protections such as the right to privacy under the 1966 International Covenant on Civil and Political Rights.
- The private sector is transforming warfare: A defining feature of this technological advancement is that States do not fully control the means of innovation. As highlighted throughout the lecture, AI infrastructure, models, and capabilities are largely developed by private companies - and Governments increasingly rely on these actors for military modernization and operational capacity, while lagging in imposing limitations through national regulation and, at times, deliberately declining to regulate so as not to fall behind in the AI arms race. This closed circle creates new accountability challenges, blurred lines between public and private responsibility, and an urgent need for ethical alignment between States and industry; it can be broken only by States adopting normative limitations in the field of AI.
- Security Council debates can indirectly shape a future treaty or set of principles: The United Nations Security Council’s debates play a crucial role in setting the global agenda, defining acceptable norms, and creating political pressure and expectations. Over time, these discussions can contribute to the development of customary international law and to the emergence of a treaty, a treaty norm, or a set of guiding principles on the use of AI during military conflicts.
- The central red line is that the human element must remain: Across all UN discussions - legal, political, ethical - one principle stands out as non-negotiable: the decision over life and death, in the context of conflicts between States, must remain human. This is not only a legal requirement; it is a civilizational boundary, and thus everything that follows, whether norms, frameworks, or future treaties, will ultimately be measured against this principle.
