Aligning AI with Human Norms

Title: Aligning AI with Human Norms: Multi-Objective Deep Reinforcement Learning with Active Preference Elicitation
Author: Peschl, Markus (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Cavalcante Siebert, L. (mentor); Zgonnikov, A. (mentor); Oliehoek, F.A. (mentor); Kurowicka, D. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Applied Mathematics
Date: 2021-10-08

Abstract: The field of deep reinforcement learning has recently seen major successes, achieving superhuman performance in discrete games such as Go and the Atari domain, as well as astounding results in continuous robot locomotion tasks. However, correctly specifying human intentions in a reward function is highly challenging, which is why state-of-the-art methods lack interpretability and may cause unforeseen societal impacts when deployed in the real world. To tackle this, we propose multi-objective reinforced active learning (MORAL), a novel framework based on inverse reinforcement learning that combines a diverse set of human norms into a single Pareto-optimal policy. We show that by combining active preference learning with multi-objective decision-making, one can interactively train an agent to trade off a variety of learned norms as well as primary reward functions, thereby mitigating negative side effects. Furthermore, we introduce two toy environments, Burning Warehouse and Delivery, which allow us to study the scalability of our approach in both state-space size and reward complexity. We find that mixing expert demonstrations and preferences achieves greater efficiency than employing a single type of expert feedback and, finally, suggest that, unlike previous work, MORAL is able to learn a deep reward model consisting of multiple expert utility functions.
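The core multi-objective idea in the abstract can be illustrated with a minimal sketch: several reward signals (a primary task reward plus learned "norm" rewards) are combined into a single scalar via a preference weighting, which an active elicitation scheme would adjust from expert feedback. This is an illustrative example only, not the thesis implementation; the function name, weights, and reward values are hypothetical.

```python
import numpy as np

def scalarize_rewards(reward_vector, weights):
    """Combine per-objective rewards into one scalar via a convex combination.

    A sketch of linear scalarization in multi-objective RL: the weight
    vector is normalized so it represents a valid preference weighting
    over the objectives.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to sum to 1
    return float(np.dot(weights, reward_vector))

# Hypothetical example: one task reward and two learned norm rewards,
# with elicited preference weights favoring the task objective.
rewards = np.array([1.0, -0.5, 0.2])   # [task, norm_1, norm_2]
prefs = [2.0, 1.0, 1.0]                # preference weights (illustrative)
print(scalarize_rewards(rewards, prefs))  # → 0.425
```

A policy trained against this scalarized reward then realizes one trade-off among the objectives; varying the weights traces out different Pareto-optimal behaviors.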
Subject: Active Learning; Inverse Reinforcement Learning; Multi-Objective Decision-Making; Value Alignment; Deep Learning
To reference this document use: http://resolver.tudelft.nl/uuid:f80e69f0-716d-423a-8124-b834984b7fc5
Part of collection: Student theses
Document type: master thesis
Rights: © 2021 Markus Peschl
Files: Peschl_Value_Alignment_Repo.pdf (PDF, 34.42 MB)