AI Defense Contracts and the Legal Limits of Autonomous Weapons

By: Spencer Robinson

Introduction 

Artificial intelligence is rapidly transforming modern warfare. Militaries increasingly rely on algorithmic systems to analyze battlefield data, assist with targeting decisions, and operate unmanned platforms. As these technologies become more sophisticated, some systems are moving toward greater levels of autonomy, raising difficult questions under international humanitarian law (IHL). Autonomous weapons systems, generally defined as weapons capable of selecting and engaging targets without human intervention, challenge legal frameworks that were designed around human decision-making in the use of force.

The growing role of private technology companies in developing military AI further complicates the legal landscape. Unlike the physical weapons produced by traditional defense contractors, many military AI capabilities are dual-use technologies developed by private firms and later integrated into military operations. As a result, responsibility for potential violations of IHL may become diffused across states, military commanders, and private developers. The increasing privatization of military AI exposes an accountability gap in existing international law and highlights the need for clearer rules governing autonomous weapons systems.

What Are Autonomous Weapons Systems?

Autonomous weapons systems (AWS) generally refer to weapons that can select and engage targets without direct human intervention once activated. While fully autonomous weapons do not exist yet, many militaries already deploy systems with significant autonomous features, including missile defense platforms, loitering munitions, and AI-assisted targeting tools. These technologies vary in the level of human involvement in the use of force. Scholars and policymakers often distinguish between systems that are “human-in-the-loop,” where a human must authorize a strike; “human-on-the-loop,” where humans supervise automated systems and can intervene if necessary; and “human-out-of-the-loop,” where machines independently identify and attack targets.

The rapid development of AWS is closely tied to the growing role of private technology companies in defense innovation. Unlike traditional weapons systems developed primarily by defense contractors, many modern military AI capabilities rely on software, data processing, and machine-learning tools developed in the commercial technology sector. As governments turn to private firms to design and supply these technologies, the line between civilian innovation and military application becomes increasingly blurred. This shift raises important legal questions about how existing international humanitarian law should regulate systems whose decision-making processes may be partially or wholly automated.

The Accountability Gap in Autonomous Warfare

AWS challenge one of the foundational assumptions of international humanitarian law: that humans ultimately make decisions about the use of force. The core IHL principles of distinction, proportionality, and precaution require human judgment in evaluating military necessity and assessing the risk of civilian harm. When an autonomous system independently identifies and engages a target, however, the decision-making process becomes partially or entirely delegated to an algorithm. This shift raises difficult questions about how responsibility should be assigned when violations occur.

Traditionally, international law attributes responsibility for unlawful conduct in armed conflict to states and, in some circumstances, to individual military commanders. Autonomous systems complicate this framework. If an AI-enabled weapon misidentifies a target due to flawed training data, unexpected environmental conditions, or emergent machine-learning behavior, it may be difficult to determine whether the error should be attributed to the state deploying the weapon, the commander overseeing its use, or the private developers who designed the underlying algorithm.

The growing involvement of private technology companies further blurs these lines of responsibility. Many military AI systems rely on software and machine-learning models developed by private firms that may have limited visibility into how their technologies are ultimately deployed. Yet international humanitarian law largely regulates state conduct, not the actions of private developers whose technologies shape battlefield decision-making. As a result, the increasing privatization of military AI risks creating an accountability gap in which harmful outcomes may occur without clear mechanisms for legal responsibility.

Without clearer rules governing the development and deployment of autonomous weapons, this uncertainty over where responsibility lies threatens the accountability structures that international humanitarian law relies on to regulate the use of force.

Emerging Regulatory Efforts

Recognizing these challenges, states and international organizations have increasingly begun to examine how autonomous weapons systems should be regulated. Much of this debate has taken place within the framework of the United Nations Convention on Certain Conventional Weapons (CCW), where states have convened expert groups to study emerging technologies in lethal autonomous weapons systems (LAWS). While states have not yet reached agreement on a binding treaty, discussions have focused on the importance of maintaining “meaningful human control” over the use of force and ensuring that autonomous systems comply with existing principles of international humanitarian law.

Additionally, several states have adopted domestic policies governing the development and deployment of autonomous military technologies. The United States, for example, requires senior-level review of autonomous weapon systems under Department of Defense Directive 3000.09 and emphasizes the need for appropriate levels of human judgment in decisions involving the use of force. Scholars and policymakers have also proposed expanding existing weapons review procedures under Article 36 of Additional Protocol I to the Geneva Conventions, which require states to assess whether new weapons comply with international law, to include greater scrutiny of algorithmic systems and their decision-making processes.

Although these efforts represent important steps, current regulatory frameworks remain fragmented and largely nonbinding. As autonomous systems become more capable and widely deployed, international law will likely face increasing pressure to develop clearer and more uniform rules governing their use.

Conclusion

Artificial intelligence is poised to play an increasingly central role in modern warfare, but existing legal frameworks were not designed with autonomous decision-making systems in mind. As militaries adopt AI-enabled technologies and rely more heavily on private companies to develop them, responsibility for battlefield decisions may become dispersed across multiple actors. This diffusion of responsibility risks creating an accountability gap that undermines the enforcement of international humanitarian law’s core principles.

To preserve meaningful legal oversight of the use of force, international law must adapt to the realities of autonomous warfare. Strengthening weapons review procedures, clarifying the role of human control in targeting decisions, and establishing clearer standards for the development and deployment of autonomous systems will be critical steps toward ensuring that emerging military technologies remain consistent with the humanitarian protections that international law seeks to safeguard.
