7 Rules Regulating Means and Methods of Modern Day Warfare: The Drone Debate
Prof. Vik Kanwar
Learning Outcomes
- The students will learn how the principles of IHL apply to technological advances in modern-day warfare, and how legal principles have evolved to manage the ongoing interaction between law and technological change.
- The students will also learn about the changing patterns of warfare and how IHL has developed legal principles to mitigate the effects of the use of technology in war.
Introduction
Many people watching unfortunate stories of civilian casualties in America’s “drone war” in Pakistan and Afghanistan assume there must be a basic right against such killing, or at the very least that international law condemns these acts as illegal. Under Human Rights Law (HRL), one might view these “targeted killings” as extra-judicial killings, or in any case a deprivation of the right to life. Members of the UN have a Charter-based duty to take joint and separate action to effectuate “universal respect for, and observance of, human rights,” and many are also signatories to the ICCPR, which sets out these rights in detail, including, crucially, the right to life. If we accept that the proper lex specialis (see previous module) governing armed conflict is IHL and not HRL, however, then the matter is not so simple. Under IHL, the killing of combatants is actually permissible, and even the incidental killing of civilians can be excused on the basis of particular principles.
This chapter will explore the IHL-based regulations covering this area in detail, their current possibilities and their limits, before turning back to HRL for further guidance. The technological “post-humanisation” of war (not the perennial “inhumanity” of war, but the removal of humans from the battlefield) is an issue that IHL has only recently had to grapple with. This section aims to help the reader appreciate how IHL has had to evolve to accommodate a change in technology in which the absence of combatants is accompanied by the appearance of technologically precise but legally slippery proxies. This module attempts to ground the understanding of IHL laid out in the previous module in a more contemporary setting, thereby acquainting the reader with how the principles of International Humanitarian Law (IHL) apply to dynamic changes in technology.
This chapter uses these debates to show that IHL’s focus has now moved from the banning of particular weapons (which remains rare) to the regulation of the method of their use. The question is no longer whether the use of a particular weapon violates IHL, but whether its use in a particular scenario was justified and legitimate. The module also grounds the reader’s understanding of how three principles, (1) distinction, (2) the prohibition of unnecessary suffering, and (3) proportionality, have been adapted to take these changing trends into account.
Finally, before defaulting to easier and more reassuring standards derived from Human Rights Law or imagining idealistic but as yet non-existent disarmament treaties, this module urges the readers to think critically about the post-human era. Even if military personnel are put at greater distances from the battlefield, the equally human targets are not. And even if at the moment drones or warbots are not completely autonomous, it is worth considering how IHL would have to evolve in the event that they become completely autonomous. How would notions of responsibility and accountability change? These questions are becoming more relevant with each passing day.
The Drone Debate: An IHL Approach
Over the past few years, the use of increasingly “automated” (though not completely autonomous) weapons has grown. Names for these include “drones,” “warbots,” “robotic weapons,” unmanned combat vehicles (UCVs), and unmanned aerial vehicles (UAVs). Their use has increased and become visible in combat zones including Iraq, Pakistan, and Afghanistan. The regulation of these weapons has pervaded academic theses, mainstream media attention, and books, with each account speculating on the application of IHL to robotic weapons. The need for IHL to evolve to remain relevant is historically documented: any technological change has a tangible impact on the application and relevance of IHL. Questions such as where to attribute responsibility in the event of a drone attack are perplexing. Rather than recognising the need for IHL to evolve, numerous scholars have instead argued that we must ban any form of technological change.
Fear of New Weapons
The concern is not the introduction of robots onto the battlefield, but the gradual removal of humans; in this context, the gradual removal of humans implies that humans no longer control the weaponry. This could be problematic for the implementation of IHL because it suggests a loss of control over means and outcomes and an inability to impute responsibility to a party in an armed conflict, and hence a loss of respect for IHL norms. The groundwork for this fear is a conservative idea that IHL is static and simply cannot evolve to keep up with technological change.
Weapons: Means and Methods
Much like the use of unmanned war drones, the latest controversies about new means and methods of warfare have concerned either cyberwarfare or autonomous (robotic) weapons. As a historical matter, the escalation of weapon capabilities (from fists to stones to nuclear and robotic weapons) has always been greeted with suspicion:
“The earliest warriors, accustomed to conduct[ing] hostilities by using […] must have regarded the first appearance of more advanced technologies as violating the laws of war.”
Drone warfare has been vulnerable to this critique. According to Vik Kanwar, “as with many weapons in the past, novelty cannot be equated with illegality.”
Increasingly, the focus is no longer on the weapon itself but on the method of its deployment.
While in the recent humanitarian phase the chivalrous ideal of “the right of Belligerent Parties to choose methods or means of warfare”, like duellists agreeing upon “pistols at dawn”, is no longer unlimited, IHL is likewise no longer focused on banning entire classes of weapons. Instead, limitations are placed on the manner of a weapon’s use.
The key point concerning the deployment of unmanned systems is this: whether novel or not, any weapon is subject to the general rules and principles of the customary and treaty law of international armed conflict, as well as to any other treaty law applicable to the contracting parties.
Three relevant principles are identifiable from the customary and treaty law on armed conflict:
- Distinction (between combatants and non-combatants and between military objectives and civilian objects);
- The prohibition on causing unnecessary suffering to combatants;
- Proportionality.
Further, the HPCR Manual provides that weapons used in air and missile warfare must comply with:
(a) The basic principle of distinction between civilians and combatants and between civilian objects and military objectives.
Consequently, it is prohibited to conduct air or missile combat operations which employ weapons that
(i) cannot be directed at a specific lawful target and therefore are of a nature to strike lawful targets and civilians or civilian objects without distinction; or
(ii) the effects of which cannot be limited as required by the law of international armed conflict and which therefore are of a nature to strike lawful targets and civilians or civilian objects without distinction;
(b) The prohibition of unnecessary suffering or superfluous injury.
Consequently, it is prohibited to conduct air or missile combat operations which employ weapons that are calculated, or of a nature, to cause unnecessary suffering or superfluous injury to combatants.
Formally, IHL would first require an analysis of whether a weapon is of a prohibited nature, and then look specifically at its use under a separate analysis. This structure is reflected in Art. 36 of Additional Protocol I to the Geneva Conventions (AP I), which advocates a preventive approach by requiring contracting parties to determine whether the study, development or acquisition of a new weapon would be contrary to the provisions of the Protocol.
There is no automatic prohibition on robotic weapons. One must find instances in which their use would be indiscriminate, disproportionate, or cause excessive injury, rather than focusing on whether certain characteristics are intrinsic to the weapons themselves. As the development of unmanned weapons accelerates, it may be time to reverse the order of the conduct-nature analysis once again. In the meantime, it is not IHL but treaty-based regimes that would put their very use into question: the most reliable way to outlaw the use of specific weapons, or at least to ensure their review, is for states to pursue a multilateral convention banning or stigmatising weapons of that kind. It is therefore important for the reader to note that, when IHL is imagined outside the static box of the fear thesis, it can adequately regulate a scenario in which excessive force is used through a drone attack.
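The two-phase structure of this analysis (first the weapon's nature, then its particular use) can be caricatured as a simple decision procedure. The following Python sketch is purely illustrative; every attribute and threshold is a hypothetical stand-in for a legal finding, and a real Art. 36 review is a legal exercise, not a computation:

```python
from dataclasses import dataclass


@dataclass
class Weapon:
    # Hypothetical attributes standing in for legal findings about the
    # weapon's *nature* (the first phase of the analysis).
    can_be_directed_at_lawful_target: bool  # distinction, by nature
    effects_can_be_limited: bool            # distinction, by nature
    causes_superfluous_injury: bool         # unnecessary suffering

@dataclass
class Use:
    # Hypothetical attributes describing a particular *use* (second phase).
    targets_distinguished: bool  # distinction, in conduct
    incidental_harm: float       # expected civilian harm (stylised number)
    military_advantage: float    # anticipated military advantage (stylised)

def review_nature(w: Weapon) -> bool:
    """Phase 1: is the weapon of a prohibited nature (Art. 36-style review)?"""
    return (w.can_be_directed_at_lawful_target
            and w.effects_can_be_limited
            and not w.causes_superfluous_injury)

def review_use(u: Use) -> bool:
    """Phase 2: is this particular use discriminate and proportionate?"""
    return u.targets_distinguished and u.incidental_harm <= u.military_advantage

def lawful(w: Weapon, u: Use) -> bool:
    # A weapon lawful by nature may still be used unlawfully; both phases
    # must pass.
    return review_nature(w) and review_use(u)

drone = Weapon(True, True, False)
strike = Use(targets_distinguished=True, incidental_harm=1.0, military_advantage=5.0)
print(lawful(drone, strike))  # prints True: lawful weapon, lawful use
```

The structural point of the sketch matches the chapter's argument: for most modern weapons, including drones, phase 1 passes easily, so the legally decisive work has shifted to phase 2, the regulation of conduct.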
Law, War, and Technological Change
Automation of warfare does not yet mean complete “autonomy” of weapons (this distinction will be discussed in detail infra), but rather that at least some life and death decisions will be made on the battlefield without direct input from humans. What is referred to as “autonomy” is not artificial intelligence capable of supplanting human responsibility; instead it is an increase in the time and distance between human actions and their results. In particular, some assume that agency or responsibility should be distributed as though robots were combatants rather than weapons.
Singer sometimes confuses the matter by taking an overly anthropomorphic view of autonomy, treating warbots as the most irregular of “combatants.” Rather than being artificial persons, such as states or corporations, they more clearly belong to the category of weapons whose use is considered an extension of human action, as “means and methods” of combat.
Technology has already distanced soldiers spatially and temporally from the killing they cause, increasing asymmetrical safety between belligerents.
Given the speed of technological change, anticipating the advent of autonomous weapons might not be a bad idea, and interventions might be sought in the engineering of norms as well as technology. The greatest obstacles to automated weapons on the battlefield are likely to be legal and ethical concerns. Forward-thinking scholars have taken up the challenges resulting from the interplay of law and technology. Both rules and weapons can be re-tooled to accommodate the other.
The Continuing Possibilities of Human Rights Law
We have noted in past modules the dangers of muddling humanitarian law with human rights law or the law of self-defence. Indeed, the US has a parallel justification for the use of drones grounded not in IHL but in the law of self-defence. But perhaps the last hope to restore “humanity” in this debate is to look at the issue from the point of view not of “weapons law” but of the law of targeted killing, and therefore human rights law (HRL) once again. For example, using an HRL approach, a recent Amnesty International report urges the US to conduct a thorough, impartial and independent investigation into whether certain CIA personnel may be guilty of “arbitrary” and “extrajudicial executions” in violation of international law.
The advantage of human rights law (HRL) lies in the fact that it applies at all times and in all contexts, including during war. For example, there are no geographic or contextual limitations on the UN Charter-based duty of all members to take joint and separate action to effectuate “universal respect for, and observance of, human rights.” (The preamble to the International Covenant on Civil and Political Rights (ICCPR) affirms this universal reach.) How practical is this, however?
Should Drone Attacks be Treated as “Arbitrary” and “Extrajudicial Executions” under HRL?
Because the current targeted killings do not occur in the context of a classic armed conflict, there might seem to be an even greater presumption that HRL is the appropriate body of law here. But problems arise.
If we switched to an entirely HRL based approach, we would treat the question as one of domestic law. But whose domestic law? Let us take the example (however unlikely) of an American drone strike in India. What law would actually be applicable? We could refer to the Indian domestic law under which for any deprivation of life or liberty there must be a procedure established by law, which is not arbitrary and unreasonable. Or else we could seek extraterritorial application of the U.S. Constitution, particularly the parts of the Bill of Rights concerning due process and punishment, the Fifth Amendment (Grand Jury, Double Jeopardy, Self-Incrimination, Due Process), Sixth Amendment (Criminal Prosecutions – Jury Trial, Right to Confront and to Counsel); and Eighth Amendment (Excess Bail or Fines, Cruel and Unusual Punishment). In these ways, the general human right to freedom from “arbitrary” deprivation of life will only be applicable with respect to those persons who are within the jurisdiction, actual power, or “effective control” of the state or other entity using a drone.
Faced with these choices, and the unlikelihood of being able to prosecute the handler of a drone strike as someone conducting an execution, the only options left are to privatise the killing (thus invoking domestic criminal law), or to fall back on IHL.
Under IHL, many targeted killings are considered lawful, including the targeting of those who are taking a direct part in hostilities (DPH) during an armed conflict.
If we apply a separate regime of self-defence (which is part of the American approach), then it is possible to target those who are taking a direct part in ongoing armed attacks (DPAA) during permissible measures of self-defence in time of peace or war.
What about a “convergence” approach to IHL and HRL? If convergence means convergence, and not the eclipsing of one system by the other, then the most likely reading is that, if the human right to freedom from “arbitrary” killing did apply:
Lawful targeting of such persons in compliance with the principles of distinction, reasonable necessity, and proportionality under the law of self-defence or the laws of war will not be “arbitrary” within the meaning of human rights law (i.e., within the meaning of Article 6 of the ICCPR). Additionally, compliance with the principles of distinction, reasonable necessity, and proportionality will provide a higher form of protection than a general human rights test based merely on what is or is not arbitrary in a given circumstance.
This leads to a certain kind of circularity, and ultimately it remains largely within the ambit of IHL.
Other Possibilities in International Law
Some authors think that the deficiencies of IHL, which provides weak guidance and insufficient constraints on these weapons, should be addressed by future arms control agreements limiting the use of autonomous weapons. To develop such treaties, we must define autonomous weapons under international law and agree on permissible roles and functions for them. First, the scope of the treaty would rely upon agreement on a definition of a robotic or “autonomous” weapon. After arriving at a definition precise enough to exclude irrelevant technologies yet capacious enough to include future developments, the real challenge will be the content of regulation. When should the armed forces be allowed to use such weapons? For example, states might bind themselves to use autonomous weapons only for defensive functions, such as guarding military facilities and no-fly zones.
Another approach to arms control agreements has been to limit the number of weapons, a step that states might take to avoid letting loose too many uncontrolled or poorly monitored weapons in the world. An analogy could be made to the continued development of nuclear weapons even as effective international pressure has kept their use in check since the end of World War II. It is unclear, however, whether such an analogy represents a model for effective deterrence. On the one hand, nuclear weapons have not been used; on the other, stockpiles have continued to grow despite the supposed limitations.
A third approach would be to use autonomous weapons only for difficult humanitarian missions not involving the use of force, such as the clearing of landmines. Such agreements will only work if states could agree upon definitions and classifications, and if these definitions keep pace with changes in technology. At the moment, the present generation of drones has not inspired a significant movement towards an outright ban (as in case of landmines). Nor do we see states adopting subtler schemes for limitations of numbers, or monitoring and compliance schemes.
Some authors have even imagined that the technology itself could be programmed and guided by the rules of IHL. In a recent book, Ronald Arkin is optimistic about the compatibility of autonomous weapons with international law and the conventions of war. He believes that robots can be created not only to conform to international law, but actually to “outperform” human soldiers in their capacity for the “ethical” (rule-based) use of lethal force. He puts forth a positive agenda for designing autonomous systems programmed with an “artificial conscience” based in part on the rules of IHL and weapons law. Arkin provides examples that illustrate autonomous systems’ potential to internalise respect for the ethical use of force (including the relevant laws of war), and he is just as pessimistic when he examines why human soldiers in battle fail to make ethical decisions. Arkin’s utopianism remains a step ahead of the futurism of the other authors reviewed above. But questions remain. In the event of failure, should IHL follow the designer or the unit that deployed the weapons? Arkin avoids the implication that, whatever ethical imperatives are programmed, a weapon whose use or misuse cannot be attributed to any human agent is dangerous from the outset. This brings us back to Singer’s intuition that putting humans at a distance from the battlefield endangers compliance with the governing rule system, and indeed the applicability of the rule system itself.
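The idea of a rule-based “artificial conscience” can be caricatured as a governor layer that sits between target selection and weapon release and vetoes any engagement failing an IHL-derived test. The sketch below is my own illustrative simplification, not Arkin's actual architecture; the predicate names are hypothetical and far simpler than anything a real system would require:

```python
def ethical_governor(engagement: dict) -> bool:
    """Permit weapon release only if every IHL-derived constraint is satisfied.

    Any single failed rule vetoes the engagement. Note that this encodes
    permission, not obligation: passing the check never *requires* force.
    """
    rules = [
        # Distinction: only combatants / military objectives may be targeted.
        engagement["target_is_combatant"],
        # Unnecessary suffering: the weapon must not be of a nature to
        # cause superfluous injury.
        not engagement["weapon_causes_superfluous_injury"],
        # Proportionality: expected incidental harm must not exceed the
        # anticipated military advantage (stylised numbers).
        engagement["expected_civilian_harm"] <= engagement["military_advantage"],
    ]
    return all(rules)

proposed = {
    "target_is_combatant": True,
    "weapon_causes_superfluous_injury": False,
    "expected_civilian_harm": 1.0,
    "military_advantage": 4.0,
}
print(ethical_governor(proposed))  # prints True; flipping any predicate vetoes
```

The structural point is that such a layer could refuse unlawful engagements more consistently than a stressed human soldier; the question flagged in the text, namely who bears responsibility when the layer fails, is untouched by any such design.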
Feedback Loop between IHL and Technology in Warfare
The introduction of sophisticated robotic weapons into combat brings changes in the conduct of armed conflict profound enough to challenge existing IHL. It is possible that the law will expand to incorporate what was arguably previously outside its reach. One suggestion is to look beyond the weapon to find the human agent responsible. Here the question of the applicability of IHL must be revisited in each instance, including cases of the unlawful participation of civilians in hostilities. Where human agency becomes so attenuated as to seem to disappear from view, attribution becomes a complex inquiry in which deployment must be traced and programming stands in for command. This extension of the law is already at work and may reach civilian computer technicians thousands of miles away from the battlefield. With modern technologies such as long-range missiles and unmanned missile-bearing planes, a well-articulated weapons law is useful to IHL, which has always struggled to keep pace with technological innovations in the means and methods of combat.
Conclusion
In sum, we have learnt about the novel strategies used to conduct warfare in ‘modern’ times and the global community’s responses, which have assisted in restructuring legal principles so as to equip states to mitigate the use and impact of technological advancement. We have also seen how novel yet lethal methods used in the name of technology have made it difficult to assign responsibility for their consequences. It is essential to begin deliberating on modifications to the international legal framework that can help a state, or the international community, guard against the overreaching use of autonomous warfare or cyberwarfare, which poses a constant threat to security.
References
- Vik Kanwar, Post-Human Humanitarian Law: The Law of War in the Age of Robotic Weapons, HARVARD NATIONAL SECURITY JOURNAL (2011) (Sections of the current module are adapted from this article, and used with permission of Author, who is sole owner of the copyright. The article is licensed generally for non-profit and educational purposes). http://harvardnsj.org/wp-content/uploads/2011/02/Vol-2-Kanwar.pdf.
- Jordan Paust, Drone Attacks Can Be Justified Under International Law, JURIST – Forum, October 23, 2013, http://jurist.org/forum/2013/10/jordan-paust-drones-justification.php.
- Program on Humanitarian Policy and Conflict Research, HPCR Manual on International Law Applicable To Air And Missile Warfare 8 (2009), available at http://ihlresearch.org/amw/HPCR%20Manual.pdf.