Issue 4.4 / 2015: 20-27

Deadly Algorithms

Susan Schuppli

Can Legal Codes Hold Software Accountable for Code That Kills? 

Algorithms have long adjudicated over vital processes that help to ensure our wellbeing and survival: from pacemakers that maintain the natural rhythms of the heart, and genetic algorithms that optimize emergency response times by cross-referencing ambulance locations with demographic data, to early warning systems that track approaching storms, detect seismic activity, and even prevent genocide by monitoring ethnic conflict with orbiting satellites.[1] However, algorithms are also increasingly being tasked with instructions to kill: executing coding sequences that quite literally execute.
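By way of illustration, the optimization mentioned above can be sketched in a few dozen lines. What follows is a minimal genetic algorithm, assuming invented demand points and candidate station sites in place of real demographic and ambulance data: placements are scored by a weighted-distance proxy for response time, and the fittest placements are recombined and mutated over successive generations. It is a toy under stated assumptions, not a description of any deployed dispatch system.

```python
import random

# Hypothetical demand points (x, y, weight): 'weight' stands in for demographic
# call volume. All numbers are invented for illustration only.
DEMAND = [(random.uniform(0, 10), random.uniform(0, 10), random.uniform(1, 5))
          for _ in range(40)]
CANDIDATE_SITES = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(15)]
N_AMBULANCES = 4

def cost(individual):
    """Weighted mean distance from each demand point to its nearest ambulance,
    used here as a crude proxy for response time."""
    sites = [CANDIDATE_SITES[i] for i in individual]
    total = sum(w * min(((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 for sx, sy in sites)
                for x, y, w in DEMAND)
    return total / sum(w for _, _, w in DEMAND)

def random_individual():
    return random.sample(range(len(CANDIDATE_SITES)), N_AMBULANCES)

def crossover(a, b):
    """Combine two placements: pool their sites and keep a random subset."""
    pool = list(dict.fromkeys(a + b))
    random.shuffle(pool)
    return pool[:N_AMBULANCES]

def mutate(individual, rate=0.2):
    """Occasionally swap one station for another candidate site."""
    if random.random() < rate:
        individual = individual.copy()
        individual[random.randrange(N_AMBULANCES)] = random.randrange(len(CANDIDATE_SITES))
    return individual

def evolve(generations=100, pop_size=30):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)                      # selection pressure
        elite = population[: pop_size // 2]
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        population = elite + children
    return min(population, key=cost)

best = evolve()
print("chosen sites:", [CANDIDATE_SITES[i] for i in best], "cost:", round(cost(best), 3))
```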

 

Computing Terror

Guided by the Obama Presidency’s conviction that the war on terror can be won by ‘out-computing’ its enemies and pre-empting terrorist threats using predictive software—no doubt bolstered by the President’s reliance on big data and social media to return him to office in 2012—a new generation of deadly algorithms is being designed that will both control and manage the ‘kill list’ and, along with it, the decision to strike.[2] Indeed, the recently terminated practice of ‘signature strikes’, in which data-analytics were used to determine emblematic ‘terrorist’ behaviour and match these patterns to potential targets on the ground, already points to a future in which intelligence gathering, assessment, and military action, including the calculation of who can legally be killed, will largely be performed by machines based upon an ever-expanding database of aggregated information.
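The computational step behind such pattern-matching can be made concrete with a deliberately schematic sketch. The feature names, weights, and threshold below are hypothetical, invented only to show how a behavioural ‘signature’ reduces to a weighted sum scored against aggregated records; no claim is made that any actual programme works this way.

```python
# Schematic rendering of 'signature' matching: a behavioural profile expressed
# as weighted features, scored against an aggregated database of records.
# Every feature, weight, threshold, and record below is fabricated.

SIGNATURE = {                          # the 'emblematic' pattern, as weights
    "visits_flagged_location": 0.4,
    "contacts_watchlisted_number": 0.35,
    "travels_in_convoy": 0.25,
}
THRESHOLD = 0.6                        # arbitrary cut-off for a 'match'

def signature_score(record):
    """Weighted sum of binary behavioural features for one tracked individual."""
    return sum(weight for feature, weight in SIGNATURE.items() if record.get(feature))

def matches(database):
    """Return the identifiers whose aggregated behaviour crosses the threshold."""
    return [rid for rid, record in database.items()
            if signature_score(record) >= THRESHOLD]

# Entirely invented records, for illustration only.
database = {
    "person-001": {"visits_flagged_location": True, "travels_in_convoy": True},
    "person-002": {"contacts_watchlisted_number": True},
    "person-003": {"visits_flagged_location": True,
                   "contacts_watchlisted_number": True},
}
print(matches(database))   # -> ['person-001', 'person-003']
```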

However, this transition to execution by algorithm is not, as many have suggested, simply a continuation of killing at ever-greater distances inaugurated by the invention of the bow that separated warrior and foe. Rather, it can be seen as a consequence of the ongoing automation of warfare, which can be traced back to the cybernetic coupling of Claude Shannon’s mathematical theory of information with Norbert Wiener’s wartime research into feedback loops and communication control systems.[3] As this new era of intelligent weapons systems progresses, operational control and decision-making will increasingly be out-sourced to machines.

In 2011 the US Department of Defense (DOD) released its ‘roadmap’ forecasting the expanded use of unmanned technologies, of which unmanned aircraft systems—drones—are but one aspect of an overall strategy towards the implementation of fully autonomous Intelligent Agents. The roadmap states:

“The Department of Defense’s vision for unmanned systems is the seamless integration of diverse unmanned capabilities that provide flexible options for Joint Warfighters while exploiting the inherent advantages of unmanned technologies, including persistence, size, speed, maneuverability, and reduced risk to human life. DOD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure.”[4]

The document is a strange mix of Cold War caricature and Fordism set against the backdrop of contemporary geopolitical anxieties, as it sketches out two imaginary vignettes to provide ‘visionary’ examples of the ways that autonomy can improve efficiencies through inter-operability across military domains, aimed at enhancing capacities and flexibility between manned and unmanned sectors of the Army, Air Force, and Navy. In these future scenarios the scripting and casting are familiar, pitting the security of hydrocarbon energy supplies against rogue actors equipped with Russian technology. One vignette concerns an aging Russian nuclear submarine, deployed by a radicalized Islamic nation-state, that is beset by an earthquake in the Pacific, contaminating the coastal waters of Alaska and threatening its oil energy reserves. The other involves the sabotage of an underwater oil pipeline in the Gulf of Guinea off the coast of Africa, complicated by the approach of a hostile vessel capable of launching a Russian short-range air-to-surface missile.[5] These action-film vignettes—fully elaborated across five pages of the report—stand in perplexing contrast to the claims made throughout the rest of the document as to the sober science, political prudence, and economic rationalizations that guide the move towards fully unmanned systems.

On what grounds are we to be convinced by the vision and strategies being advanced? On the basis of a collective cultural imaginary that finds its politics within the Computer Generated Image (CGI) labs of the infotainment industry, or via an evidence-based approach to solving the complex problems posed by changing global contexts? Not surprisingly, the level of detail and techno-fetishism used to describe unmanned responses to these risk scenarios is far more exhaustive than the treatment of the three primary challenges the report identifies as specific to the growing reliance upon and deployment of automated and autonomous systems. Implementing a higher degree of autonomy, the report suggests, faces the following challenges:

  • Investment in science and technology (S&T) to enable more capable autonomous operations.
  • Development of policies and guidelines on what decisions can be safely and ethically delegated and under what conditions.
  • Development of new Verification and Validation (V&V) and T&E techniques to enable verifiable ‘trust’ in autonomy.[6]

The delegation of decision-making to computational regimes is of crucial consideration in so far as it raises not only significant ethical dilemmas but also urgent questions as to whether existing legal frameworks are capable of attending to the emergence of these new algorithmic actors. This is especially concerning when the logic of precedent that organizes much legal decision-making (within common law systems) is the same logic that organized the drone programme in the first place: namely, the justification of an action based upon a pattern of behaviour established by prior events. This legal aporia intersects with a parallel discourse around moral responsibility; a much broader debate that has tended to structure arguments around the deployment of armed drones as an antagonism between humans and machines. As the author of the entry on ‘Computing and Moral Responsibility’ in the Stanford Encyclopedia of Philosophy puts it:

“Traditionally philosophical discussions on moral responsibility have focused on the human components in moral action. Accounts of how to ascribe moral responsibility usually describe human agents performing actions that have well-defined, direct consequences. In today's increasingly technological society, however, human activity cannot be properly understood without making reference to technological artefacts, which complicates the ascription of moral responsibility.”[7] 

When one poses the question “Under what conditions is it morally acceptable to deliberately kill a human being?”, one is not asking whether the law would permit such an act for reasons of imminent threat, self-defence, or even empathy for someone who is in extreme pain or in a non-responsive vegetative state. The moral register around the decision-to-kill operates according to a different ethical framework, one that does not necessarily bind the individual to a contract enacted between the citizen and the state. Moral positions can thus be specific to individual values and beliefs, whereas legal frameworks permit actions in our collective name as citizens contracted to a democratically elected body that acts on our behalf but with which we might be in political disagreement.

Whilst it is much easier to take a moral stance towards events that we might oppose—US drone strikes in Pakistan, for instance—than to justify a claim as to their specific illegality, given the anti-terror legislation that has been put in place since 9/11, assigning moral responsibility, proving criminal negligence, or demonstrating legal liability for the outcomes of deadly events becomes even more challenging when humans and machines interact to make decisions together. This complication will only intensify as unmanned systems become more sophisticated and act as independent legal agents. In addition, the out-sourcing of decision-making to the judiciary as regards the validity of scientific evidence since the 1993 Daubert ruling—in a case brought against Merrell Dow Pharmaceuticals—has made it difficult for the law to take an activist stance when confronted with the limitations of its own scientific understanding of technical innovation. At present it would be unreasonable to take an algorithm to court when things go awry, let alone when its instructions are executed perfectly, as in the case of a lethal drone strike.

Focusing upon the legal dimension of algorithmic liability, as opposed to more wide-ranging moral questions, is not to suggest that morality and law should be consigned to separate spheres. However, I do want to make a preliminary effort to think about the ways that algorithms are not simply re-ordering the fundamental principles that govern our lives, but might also be asked to provide alternate ethical arrangements derived from mathematical axioms.

 

Algorithmic Accountability

It is my contention that law, which has already expanded the category of ‘legal personhood’ to include non-human actors such as corporations, also offers ways to think about questions of algorithmic accountability.[8] Of course, many would argue that legal methods are not the best frameworks for resolving moral dilemmas, but then again nor are the objectives of counter-terrorism necessarily best served by algorithmic oversight. Shifting the emphasis towards a juridical account of algorithmic reasoning might prove useful when confronted with the real possibility that the kill list and other emergent matrices for managing the war on terror will be algorithmically derived as part of a techno-social assemblage in which it becomes impossible to isolate human from non-human agents. It does, however, ‘raise the bar’ for what we now need to ask the law to do. The degree to which legal codes can maintain their momentum alongside rapid technological change and submit ‘complicated algorithmic systems to the usual process of checks-and-balances that is generally imposed on powerful items that affect society on a large scale’ is of considerable concern.[9]

However, the stage has already been set for the arrival of a new cast of juridical actors: intelligent systems endowed perhaps not so much with free will in the classical sense (which would provide the conditions for criminal liability) as wilfully free in the sense that they have been programmed to make decisions based upon their own algorithmic logic.[10] Whilst armed combat drones are the most publicly visible of the automated military systems that the DOD is rolling out, they are but one of the many remote-controlled assets that will gather, manage, analyse, and act on the data that they acquire and process.

Proponents of algorithmic decision-making laud the near-instantaneous response time that allows such “Intelligent Agents” (what some have called moral predators) to make micro-second adjustments to avert a lethal drone strike should, for example, children suddenly emerge out of a house that is being targeted as a militant hideout.[11] Indeed, robotic systems have long been argued to decrease the margin of error that produces civilian casualties, often the consequence of actions made by tired soldiers in the field. In addition, machines are not overly concerned with their own self-preservation, which might also cloud judgement under conditions of duress. Yet, if these “Intelligent Agents are often used in areas where the risk of failure and error can be reduced by relying on machines rather than humans . . . everywhere, the question arises: Who is liable if things go wrong?”[12] Typically, when injury and death occur to humans, the legal debate focuses upon the degree to which such an outcome was foreseeable, and thus adjudicates on the basis of whether all reasonable efforts and pre-emptive protocols had been built into the system to mitigate against such an unlikely occurrence. However, programmers cannot run all of the variables that combine to produce machinic decisions, especially when the degree of uncertainty as to conditions and knowledge of events on the ground is as variable as the shifting contexts of conflict and counter-terrorism.
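What such a micro-second adjustment amounts to in code can be suggested with a minimal sketch. The detection labels, confidence values, and abort rule below are assumptions made for the sake of illustration; the point is how much moral weight comes to rest on a few lines of thresholded logic.

```python
from dataclasses import dataclass

# Hypothetical last-instant 'abort' rule of the kind lauded by proponents of
# algorithmic decision-making. The classes, confidences, and rule are invented.

@dataclass
class Detection:
    label: str          # e.g. "adult", "child", "vehicle" (assumed classifier output)
    confidence: float   # assumed classifier confidence in [0, 1]

def strike_authorised(detections, child_confidence_floor=0.2):
    """Return False (abort) if any detection is plausibly a child.

    The deliberately low confidence floor encodes a 'when in doubt, abort'
    policy; everything else falls through to authorisation.
    """
    for d in detections:
        if d.label == "child" and d.confidence >= child_confidence_floor:
            return False
    return True

# Fabricated frames, for illustration only.
frame_before = [Detection("adult", 0.93), Detection("vehicle", 0.88)]
frame_after = frame_before + [Detection("child", 0.41)]

print(strike_authorised(frame_before))  # True  -> strike would proceed
print(strike_authorised(frame_after))   # False -> abort
```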

Werner Dahm, Chief Scientist of the USAF, stresses the difficulty of designing error-free systems: ‘You have to be able to show that the system is not going to go awry—you have to disprove a negative.’[13] Given that highly automated decision-making processes involve complex and rapidly changing contexts mediated by multiple technologies, can we reasonably expect to build a form of ethical decision-making into these unmanned systems? Further, would an algorithmic approach to managing the ethical dimensions of drone warfare (deciding, for example, whether to strike 16-year-old Abdulrahman al-Awlaki in Yemen because his father was a radicalized cleric—a fate that he might inherit) entail the same logics that characterized signature strikes, namely that of proximity to militant-like behaviour or activity?[14] The euphemistically rebranded kill list known as the “disposition matrix” suggests that such determinations can indeed be arrived at computationally. As Greg Miller notes: “The matrix contains the names of terrorism suspects arrayed against an accounting of the resources being marshalled to track them down, including sealed indictments and clandestine operations.”[15]
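If Miller’s description is taken at face value, the disposition matrix is less an algorithm than a relational table: names arrayed against the legal and operational resources assigned to them. A minimal, entirely fabricated rendering of that structure, with an invented decision rule attached, might look as follows; the field names and the rule are assumptions, not a description of the actual database.

```python
# Fabricated sketch of a 'names arrayed against resources' table and an
# illustrative rule mapping tracked resources to an outcome. None of the
# fields or logic describe any real system.

disposition_matrix = {
    "suspect-A": {"sealed_indictment": True,  "clandestine_operation": False,
                  "capture_feasible": True},
    "suspect-B": {"sealed_indictment": False, "clandestine_operation": True,
                  "capture_feasible": False},
}

def disposition(entry):
    """Illustrative decision rule: resources marshalled determine the outcome."""
    if entry["sealed_indictment"] and entry["capture_feasible"]:
        return "refer for prosecution"
    if entry["clandestine_operation"]:
        return "escalate to operational planning"
    return "continue surveillance"

for name, entry in disposition_matrix.items():
    print(name, "->", disposition(entry))
```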

Intelligent systems are arguably legal agents but not, as of yet, legal persons—although precedents pointing to this possibility have been set in motion. The idea that an actual human being or “legal person” stands behind the invention of every machine, someone who might ultimately be found responsible when things go wrong, or even when they go right, is no longer tenable. On a basic level, this idea obfuscates the fact that complex systems are rarely, if ever, the product of single authorship; nor do humans and machines operate in autonomous realms. Indeed, both are so thoroughly entangled with each other that the notion of a sovereign human agent functioning outside the realm of machinic mediation seems wholly improbable. Consider for a moment only one aspect of conducting drone warfare in Pakistan—that of US flight logistics—in which we find that upwards of 165 people are required just to keep a Predator drone in the air for 24 hours, the half-life of an average mission. These personnel requirements are themselves embedded in multiple techno-social systems composed of military contractors, intelligence officers, data-analysts, lawyers, engineers, programmers, as well as hardware, software, satellite communication, combined air operations centres (CAOC), etc. This does not take into account the research and design infrastructure that engineered the unmanned system in the first place, designed its operating procedures, and beta-tested it. Nor does it acknowledge the administrative apparatus that brought all of these actors together to create the event we call a drone strike.[16]

In the case of a fully automated system, decision-making is reliant upon feedback loops that continually pump new information into the system in order to recalibrate it. However, and perhaps more significantly in terms of legal liability, decision-making is also governed by the system’s innate ability to self-educate: the capacity of algorithms to learn and modify their coding sequences independent of human oversight. Isolating the singular agent who is directly responsible—legally—for the production of a deadly harm (as currently required by criminal law) suggests, then, that no one entity beyond the Executive Office of the President might ultimately be held accountable for the aggregate conditions that conspire to produce a drone strike and, with it, the possibility of civilian casualties. Given, however, that the US does not accept the jurisdiction of the International Criminal Court and Article 25 of the Rome Statute governing individual criminal responsibility, what new legal formulations could be created that are able to account for indirect and aggregate causality born out of a complex chain of events, including so-called digital perpetrators?
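A toy example can make the liability problem concrete. In the sketch below, assuming an invented scoring threshold and update rule, the parameter that ultimately does the deciding is recalibrated by a history of feedback rather than set by any programmer directly, which is precisely what frustrates the isolation of a singular responsible agent.

```python
# Toy illustration of a feedback loop: a decision threshold that recalibrates
# itself from later error reports, so the parameter doing the deciding is no
# longer one any programmer set directly. The class, rule, and numbers are all
# invented for the example.

class SelfCalibratingClassifier:
    def __init__(self, threshold=0.5, learning_rate=0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def decide(self, score):
        """Act whenever a score crosses the current (moving) threshold."""
        return score >= self.threshold

    def feedback(self, decision_was_positive, was_error):
        # An error report nudges the threshold: an erroneous positive pushes it
        # up (more cautious); an erroneous negative pushes it down (more aggressive).
        if not was_error:
            return
        if decision_was_positive:
            self.threshold += self.learning_rate
        else:
            self.threshold -= self.learning_rate

clf = SelfCalibratingClassifier()
print(clf.decide(0.47))            # False: below the initial 0.5 threshold

# Two fabricated reports of missed cases (erroneous negatives) arrive.
for decision_was_positive, was_error in [(False, True), (False, True)]:
    clf.feedback(decision_was_positive, was_error)

print(round(clf.threshold, 2))     # 0.4: recalibrated by the history of reports
print(clf.decide(0.47))            # True: the same score now triggers action
```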

American tort law, which adjudicates over civil wrongs, might be one place to look for instructive models. Legal claims regarding the use of environmental toxins, highly distributed events whose lethal effects often take decades to appear and which involve an equally complex array of human and non-human agents, have been making their way into court, although not typically with successful outcomes for the plaintiffs. The most notable of these litigations are the mass toxic tort regarding the use of Agent Orange as a defoliant in Vietnam and the Bhopal disaster in India.[17] Ultimately, however, the efficacy of such an approach has to be considered in light of the intended outcome of assigning liability, which in the cases mentioned was not so much deterrence or punishment as compensation for damages.

 

Re-Coding the Law

Whilst machines can be designed with a high degree of intentional behaviour and will out-perform humans in many instances, the development of unmanned systems will need to take into account a far greater range of variables, including shifting geopolitical contexts and murky legal frameworks, when making the calculation that conditions have been met to execute someone. Building in fail-safe procedures that abort when human subjects of a specific size (children) or age and gender (males under the age of 18) appear sets the stage for a proto-moral decision-making regime. However, is the design of ethical constraints really where we wish to push back politically when it comes to the potential for execution by algorithm? Or can we work to complicate the impunity that certain techno-social assemblages currently enjoy? As a 2009 report by the Royal Academy of Engineering on autonomous systems argues:

“Legal and regulatory models based on systems with human operators may not transfer well to the governance of autonomous systems. In addition, the law currently distinguishes between human operators and technical systems and requires a human agent to be responsible for an automated or autonomous system. However, technologies which are used to extend human capabilities or compensate for cognitive or motor impairment may give rise to hybrid agents . . . Without a legal framework for autonomous technologies, there is a risk that such essentially human agents could not be held legally responsible for their actions – so who should be responsible?”[18]

Implicating a larger set of agents including algorithmic ones who aid and abet such an act might well be a more effective legal strategy, even if expanding the limits of criminal liability proves unwieldy. As the 2009 European Center for Constitutional and Human Rights (ECCHR) Study on Criminal Accountability in Sri Lanka put it:

“Individuals, who exercise the power to organise the pattern of crimes that were later committed, can be held criminally liable as perpetrators. These perpetrators can usually be found in civil ministries such as the ministry of defense or the office of the president.”[19] 

Moving down the chain of command and focusing upon those who participate in the production of violence by carrying out orders has been effective in some cases (Sri Lanka), but also problematic in others (Abu Ghraib), where the indictment of low-level officers severed the chain of causal relations that could implicate more powerful actors. Of course, prosecuting an algorithm alone for executing the lethal orders that the system is in fact designed to make is fairly nonsensical if the objective is punishment. The move must rather be part of an overall strategy aimed at expanding the field of causality and thus broadening the reach of legal responsibility.

My work as a researcher on the Forensic Architecture project, alongside Eyal Weizman and many others, developing new methods of spatial and visual investigation for the United Nations enquiry into the use of armed drones, provides a specific vantage point for considering how machinic capacities are re-ordering the field of political action and thus calling forth new legal strategies.[20] In taking the agency of things seriously, we must also take seriously the agency of those things whose productive capacities are enlisted in the decision to kill. Computational regimes, operating largely beyond the thresholds of human perception, have produced informatic conjunctions that have redistributed and transformed the spaces in which action occurs, as well as the nature of such consequential actions themselves. When algorithms are being enlisted to out-compute terrorism and calculate who can and should be killed, we need to produce a politics appropriate to these radical modes of calculation and a legal framework that is sufficiently agile to deliberate over such events.

Decision-making by automated systems will produce new relations of power for which we have, as of yet, inadequate legal frameworks and modes of political resistance—and, perhaps even more importantly, insufficient collective understanding as to how such decisions will actually be made and upon what grounds. Scientific knowledge about technical processes does not belong to the domain of science alone, as the Daubert ruling implies. Yet demands for public accountability and oversight will require much greater participation in the epistemological frameworks that organize and manage these new techno-social systems, and that may be a formidable challenge for all of us. What sort of public assembly will be able to prevent the premature closure of a certain “epistemology of facts”, as Bruno Latour would say, facts that are at present cloaked under a veil of secrecy called “national security interests”—the same order of facts that scripts the current DOD roadmap for unmanned systems?[21]

In an ABC radio interview titled ‘The Future of Drone Strikes Could See Execution by Algorithm’, Sarah Knuckey, Director of the Project on Extrajudicial Executions at New York University Law School, emphasized the degree to which drone warfare has strained the limits of international legal conventions and, with it, the protection of civilians.[22] The “rules of warfare” are already “hopelessly out-dated”, she says, and will require “new rules of engagement to be drawn up.” She further suggests: “There is an enormous amount of concern about the practices the US is conducting right now and the policies that underlie those practices. But from a much longer-term perspective and certainly from lawyers outside the US there is real concern about not just what's happening now but what it might mean 10, 15, 20 years down the track.”[23] Could these new rules of engagement—new legal codes—assume a similarly pre-emptive character to the software codes and technologies that are being evolved: what I would characterize as a projective sense of the law? Might they take their lead from the spirit of the Geneva Conventions protecting the rights of non-combatants, rather than from those protocols (the Hague Conventions of 1899 and 1907) that govern the use of weapons of war and are thus reactive and event-based in their formulation? In short, a set of legal frameworks that is not determined by precedent—by what has happened in the past—but by what may arguably take place in the future.

 

Previously published in Radical Philosophy 187 (2014): 2-8.

 

REFERENCES 

[1] See for example the satellite monitoring and atrocity evidence programmes: ‘Eyes on Darfur’. Online. Available HTTP: http://www.eyesondarfur.org (accessed 10 September 2015) and ‘The Sentinel Project for Genocide Prevention’. Online. Available HTTP: http://thesentinelproject.org (accessed 10 September 2015).

[2] Cori Crider, ‘Killing in the Name of Algorithms: How Big Data Enables the Obama Administration’s Drone War’, Al Jazeera America (2014). Online. Available HTTP: http://america.aljazeera.com/opinions/2014/3/drones-big-data-waronterrorobama.html (accessed 18 May 2014). See also the flow chart, ‘A look inside the "disposition matrix" that determines when–or if–the administration will pursue a suspected militant’, in Daniel Byman and Benjamin Wittes, ‘How Obama Decides Your Fate If He Thinks You're a Terrorist’, The Atlantic (3 January 2013).

[3] Contemporary information theorists would argue that the second-order cybernetic model of feedback and control, in which external data is used to adjust the system, does not take into account the unpredictability of evaluative data internal to the system resulting from crunching ever-larger datasets. Luciana Parisi, ‘Introduction’, in: Contagious Architecture: Computation, Aesthetics, and Space (Cambridge: MIT Press, 2013). For a discussion of Wiener’s cybernetics, see: Reinhold Martin, 'The Organizational Complex: Cybernetics, Space, Discourse', Assemblage (1998) vol. 37, 110.

[4] DOD, Unmanned Systems Integrated Roadmap FY2011-2036 (Washington, DC: Office of the Under Secretary of Defense for Acquisition, Technology, & Logistics, 2011), 3.

[5] Ibid., 1-10.

[6] Ibid., 27.

[7] Merel Noorman, ‘Computing and Moral Responsibility’, in: Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2014 edition). Online. Available HTTP: http://plato.stanford.edu/archives/sum2014/entries/computing-responsibility (accessed 10 September 2015).

[8] See: John Dewey, 'The Historic Background of Corporate Legal Personality', Yale Law Journal (1926) vol. 35, no. 6, 656, 669.

[9] ‘Workshop Primer: Algorithmic Accountability. The Social, Cultural & Ethical Dimensions of “Big Data”’ (New York: Data & Society Research Institute, 2014), 3.

[10] Gunther Teubner, ‘Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law’, Journal of Law & Society (2006) vol. 33, no. 4.

[11] See: Bradley Jay Strawser, ‘Moral Predators: The Duty to Employ Uninhabited Aerial Vehicles’, Journal of Military Ethics (2010) vol. 9, no. 4.

[12] Sabine Gless and Herbert Zech, Intelligent Agents: International Perspectives on New Challenges for Traditional Concepts of Criminal, Civil Law and Data Protection (Basel, Switzerland: University of Basel, Faculty of Law, 2014).

[13] ‘The Next Wave in U.S. Robotic War: Drones on Their Own’, Agence-France Press (28 September 2012), 2.

[14] When questioned about the drone strike that killed 16-year old American-born Abdulrahman al-Awlaki, teenage son of radicalized cleric Anwar Al-Awlaki in Yemen in 2011, Robert Gibbs, former White House Press Secretary and senior adviser to President Obama’s re-election campaign, replied that the boy should have had ‘a more responsible father.’

[15] Greg Miller, ‘Plan for Hunting Terrorists Signals U.S. Intends to Keep Adding Names to Kill Lists’, The Washington Post (23 October 2012).

[16] ‘While it might seem counterintuitive, it takes significantly more people to operate unmanned aircraft than it does to fly traditional warplanes. According to the Air Force, it takes a jaw-dropping 168 people to keep just one Predator aloft for twenty-four hours! For the larger Global Hawk surveillance drone, that number jumps to 300 people. In contrast, an F-16 fighter aircraft needs fewer than one hundred people per mission.’ Medea Benjamin (ed.), Drone Warfare: Killing by Remote Control (London: Verso, 2013, updated edition), 21.

[17] See: Peter H. Schuck (ed.), Agent Orange on Trial: Mass Toxic Disasters in the Courts (Cambridge: Belknap Press of Harvard University Press, 1987). See also: Online. Available HTTP: http://www.bhopal.com/bhopal-litigation (accessed 10 September 2015).

[18] ‘Autonomous Systems: Social, Legal and Ethical Issues’ (London: The Royal Academy of Engineering, 2009), 3.

[19] ‘Study on Criminal Accountability in Sri Lanka as of January 2009’ (Berlin: European Center for Constitutional and Human Rights, 2010), 88.

[20] Notable members of the Forensic Architecture drone investigative team also included Jacob Burns, Steffen Kraemer, Francesco Sebregondi, and SITU Research. Online. Available HTTP: http://www.forensic-architecture.org/case/drone-strikes/ (accessed 10 September 2015)

[21] Bruno Latour, Reassembling the Social: An Introduction to Actor-Network-Theory (Oxford: Oxford University Press, 2005), 260.

[22] Bureau of Investigative Journalism, ‘Get the Data: Drone Wars’. Online. Available HTTP: http://www.thebureauinvestigates.com/category/projects/drones/drones-graphs/ (accessed 10 September 2015).

[23] Sarah Knuckey, interviewed by Annabelle Quince, ‘Future of Drone Strikes Could See Execution by Algorithm’, Rear Vision, ABC Radio National, edited transcript, 2-3.