Title: An Empirical Approach to Reinforcement Learning for Micro Aerial Vehicles
Author: Junell, J. (TU Delft Control & Simulation)
Contributors: Mulder, Max (promotor); Chu, Q. P. (promotor)
Degree granting institution: Delft University of Technology
Date: 2018-12-10

Abstract: The use of Micro Aerial Vehicles (MAVs) in practical applications, to solve real-world problems, is growing in demand as the technology becomes more widely known and accessible. Proposed applications already span a wide range of fields, including the military, search and rescue, ecology, artificial pollination, and more. Compared to larger Unmanned Aerial Systems (UAS), MAVs are particularly desirable for applications that take advantage of their small size or light weight: being discreet, having insect-like maneuverability, operating in small spaces, or being inherently safer with respect to injury to people. In some cases, MAVs must work under conditions where autonomy is needed. The small size of MAVs and the desire for autonomy combine to create a demanding set of challenges for the guidance, navigation, and control (GNC) of these systems. Limitations of on-board sensors, difficulties in modeling their complex and often time-varying dynamics, and limited on-board computational resources are just a few examples of the challenges facing MAV autonomy...

Subjects: Reinforcement Learning; Micro Aerial Vehicle; Quadrotor; Policy Iteration; Hierarchical Reinforcement Learning; State Abstraction; Transfer Learning

To reference this document use: https://doi.org/10.4233/uuid:32765560-5fde-4c86-a778-decdc3eb5294
ISBN: 978-94-6186-965-4
Part of collection: Institutional Repository
Document type: doctoral thesis
Rights: © 2018 J. Junell
Files: dissertation_jjunell_20181210.pdf (PDF, 33.52 MB)