Title: Meaningful Human Control Over Autonomous Systems: A Philosophical Account
Author: Santoni De Sio, F. (TU Delft Ethics & Philosophy of Technology); van den Hoven, M.J. (TU Delft Values, Technology and Innovation)
Department: Values, Technology and Innovation
Date: 2018

Abstract: Debates on lethal autonomous weapon systems have proliferated in the past five years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. To address these concerns, the principle of "meaningful human control" has been introduced in the legal–political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements.
We identify two general necessary conditions that must be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond both to the relevant moral reasons of the humans designing and deploying it and to the relevant facts in the environment in which it operates; second, a "tracing" condition, according to which the system should be designed in such a way that the outcome of its operations can always be traced back to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we begin exploring the implications of our account for the design and use of non-military autonomous systems, for instance, self-driving cars.

Subject: meaningful human control; autonomous weapon systems; responsibility gap; ethics of robotics; responsible innovation in robotics; value-sensitive design in robotics; AI ethics; ethics of autonomous systems
OA-Fund: TU Delft
To reference this document use: http://resolver.tudelft.nl/uuid:f1a5cd1a-ea29-4495-a971-96ddc4e22cb6
DOI: https://doi.org/10.3389/frobt.2018.00015
ISSN: 2296-9144
Source: Frontiers in Robotics and AI, 5
Part of collection: Institutional Repository
Document type: journal article
Rights: © 2018 F. Santoni De Sio, M.J. van den Hoven
Files: PDF, frobt_05_00015.pdf (377.31 KB)