Thoughts on autonomous systems and their use in war

Designing autonomous systems for use in war that can adhere to International Humanitarian Law (IHL) presents significant challenges. [1] The use of autonomous systems in war raises many legal, ethical, and regulatory issues, one of the main challenges being how to appropriately assign accountability for crimes committed. [2] Autonomous systems “may lead to a legal accountability gap.” I believe that using autonomous systems in war creates situations where responsibility is not always clear. To address the accountability gap and other legal issues, I believe a possible solution would be to use autonomous systems to target only weapon systems, not humans. [3]

Two fundamental principles of international law are the principle of distinction and the principle of proportionality. [4] A distinction must be made between civilians and military objectives, with only military objectives being targeted. [5] The principle of proportionality requires that incidental civilian harm not be excessive compared to the military advantage gained from an attack. [6] The analysis of distinction and proportionality “is highly complex and highly contextual.” [7] While some take the position that autonomous robots could perform better than humans and comply more fully with IHL, others have argued the opposite, citing uncertainty. [8] Given the uncertainty surrounding autonomous systems’ compliance with IHL, I believe that autonomous systems should be used to target only weapon systems. [9]

I am of the position that humans should be held accountable and deterred from creating non-secure machines. Under the IHL doctrine of command responsibility, a leader with supervisory responsibility who “knew or should have known that subordinates were engaged in illegal activity and failed to take reasonable steps to prevent such acts” is accountable for the crimes committed. [10] I think it would be inappropriate to apply command responsibility to autonomous systems. Humans on the front line may be too slow to override such systems, and thus I believe it would be unrealistic to hold front-line humans accountable. [11] One solution could be to create ‘new crimes’ whereby criminal responsibility shifts to the computer programmers, engineers, and designers who created the autonomous system. [12] However, I believe this approach is not straightforward and would still not resolve all issues, as there might not be a ‘guilty state of mind’ or a clear person who could be identified as responsible. [13]

[1] Rebecca Crootof, “The Killer Robots are Here: Legal and Policy Implications” (2015) Cardozo Law Review Vol 36

[2] Peter Margulies, “Making Autonomous Weapons Accountable: Command Responsibility for Computer-Guided Lethal Force in Armed Conflicts” (2016) Roger Williams University School of Law

[3] Chantal Grut, “The Challenge of Autonomous Lethal Robotics to International Humanitarian Law” (2013) Journal of Conflict & Security Law

[4] Ibid

[5][6] Supra 1

[7][9] Supra 3

[10] Prosecutor v Aleksovski, Judgment, Case No. IT-95-14/1-T, paras 66–81

[11][13] Supra 3

Jennifer Harding-Marlin / Attorney / jhmarlin.com
