From the technology itself to its integration into weapons systems, intelligence collection and analysis, decision-making, and other areas, artificial intelligence (AI) has the potential to radically upend our traditional ways of thinking about national security. Center scholarship and projects, along with analyses by our faculty and fellows, offer insight into many of the challenges AI poses.

Featured

Developing a systems-based approach to ensure the ethical development and use of AI-enabled weapons.

AI-enabled weapons pose serious potential risks to humans’ ability to control violence. Putting evidence-based practices in place to manage these risks is necessary both to mitigate them and to justify the use of such weapons.

View Project

This Article explores the interaction of artificial intelligence (AI) and machine learning with international humanitarian law (IHL) in autonomous weapon systems (AWS). Lawyers and scientists repeatedly express a need for practical and objective substantive guidance on the lawful development of autonomy in weapon systems.

View Article

Lieutenant Colonel Schuller (USMC) is Associate Director of the Stockton Center for the Study of International Law at the U.S. Naval War College, where his research focuses on autonomy in weapon systems. LtCol Schuller is a Fellow with the Georgetown Center on National Security and the Law. He served as an artillery officer before becoming a judge advocate.

Listen Here

The decision to kill other humans lies at the heart of concerns over Autonomous Weapon Systems (AWS). Human judgment regarding whether lives will be taken and objects destroyed during armed conflict inherently triggers an evaluation under International Humanitarian Law (IHL) as to the lawfulness of an attack. As the link degrades between human interaction and lethal action by weapon systems, how can legal advisors evaluate who “decided” to kill? Is it possible that human control over AWS might be diluted to the point where it would no longer be reasonable to say that a human decided that such a weapon would kill?

View Article

Debates on autonomous weapon systems have expanded significantly in recent years in diplomatic, military, scientific, academic, and public forums. In March 2014, the ICRC convened an international expert meeting to consider the relevant technical, military, legal, and humanitarian issues. Expert discussions within the framework of the UN Convention on Certain Conventional Weapons (CCW) were held in April 2014 and continued in April 2015 and April 2016.

View Report

This paper maintains that the just war tradition provides a useful framework for analyzing ethical issues related to the development of weapons that incorporate artificial intelligence (AI), or “AI-enabled weapons.” While development of any weapon carries the risk of violations of jus ad bellum and jus in bello, AI-enabled weapons can pose distinctive risks of these violations.

View Paper

PODCAST

Fully Autonomous Weapons of War

As autonomous weapons become a reality on the modern battlefield, their ethical and legal implications are sparking intense debate. Georgetown Law Professor Mitt Regan dives into the complexities of AI-enabled weapon systems, exploring how these technologies challenge the principles of international humanitarian law (IHL) as conflicts in Ukraine and Israel showcase the rapid deployment of AI-driven military tools.

This interdisciplinary project engages with defense practitioners and policymakers to develop theory-grounded and actionable risk assessment and mitigation strategies for AI-enabled weapons.

Reliance on AI tools for warfighting has increased steeply. AI has the potential to enable faster and more informed decision-making on the battlefield: better surveillance, tracking, target identification, and validation, as well as more data-driven support for considering different courses of action and selecting means to pursue them.

Learn More About the Project

Scholars

David Koplow

Scott K. Ginsburg Professor of Law