Applied Ethics | Artificial Intelligence and Robotics | Military and Veterans Studies
In 2018 the United States Department of Defense (DoD) created a new Joint Artificial Intelligence Center to study the adoption of AI by the military. Its strategy, outlined in a document entitled "Harnessing AI to Advance Our Security and Prosperity," proposes to accelerate the adoption of AI in the military by fostering a culture of experimentation and calculated risk-taking, noting that AI will change the character of the future battlefield and, even more, the pace of battle. Is there any way to ensure that this future battlefield will be just? Can the age-old precepts of just warfare help guide our militaries as we develop and deploy autonomous weapons?
This is an Accepted Manuscript version of the following article, accepted for publication in Peace Review: Herzfeld N, Latiff R. 2022. Can lethal autonomous weapons be just? Peace Review 33(2): 213-219. https://doi.org/10.1080/10402659.2021.1998750
It is deposited under the terms of the Creative Commons Attribution-NonCommercial License, which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited.