Abstract
Robotic warfare has now become a real prospect. One issue that has generated heated debate concerns the development of ‘Killer Robots’. These are weapons that, once programmed, are capable of finding and engaging a target without supervision by a human operator. From a conceptual perspective, the debate on Killer Robots has been rather confused, not least because it is unclear how central elements of these weapons can be defined. Offering a precise take on the relevant conceptual issues, the article contends that Killer Robots are best seen as executors of targeting decisions made by their human programmers. However, from a normative perspective, the execution of targeting decisions by Killer Robots should worry us. The article argues that what is morally bad about Killer Robots is that they replace human agency in warfare with artificial agency, a development which should be resisted. Finally, the article contends that the issue of agency points to a wider problem in just war theory, namely the role of moral rights in our normative reasoning on armed conflict.
| Original language | English |
|---|---|
| Journal | Journal of Applied Philosophy |
| Early online date | 6 Mar 2016 |
| DOIs | |
| Publication status | Published - 6 Mar 2016 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 16 Peace, Justice and Strong Institutions