Title: Deep Reinforcement Learning for Flight Control: Fault-Tolerant Control for the PH-LAB
Author: Dally, Killian (TU Delft Aerospace Engineering; TU Delft Control & Simulation)
Contributors: van Kampen, E. (mentor); van Paassen, M.M. (graduation committee); Hulshoff, S.J. (graduation committee); Sun, B. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Aerospace Engineering | Control & Simulation
Date: 2021-02-24
Abstract: Fault-tolerant flight control faces two challenges: developing a model-based controller for every unexpected failure is unrealistic, and online learning methods can handle only limited system complexity because of their low sample efficiency. This research proposes a model-free, coupled-dynamics flight controller for a jet aircraft that withstands multiple failure types. An offline-trained cascaded Soft Actor-Critic Deep Reinforcement Learning controller successfully tracks highly coupled maneuvers, including high-bank coordinated climbing turns. The controller is robust to six unforeseen failure cases, including the rudder jammed at -15°, aileron effectiveness reduced by 70%, a structural failure, icing, and a backward c.g. shift: in each case the response remains stable and the climbing turn is completed successfully. Robustness to biased sensor noise, atmospheric disturbances, and varying initial flight conditions and reference signal shapes is also demonstrated.
Subject: Deep Reinforcement Learning; Fault-Tolerant Control; Intelligent Flight Control; Machine Learning; Flight Control Systems
To reference this document use: http://resolver.tudelft.nl/uuid:fcef2325-4c90-4276-8bfc-1e230724c68a
Part of collection: Student theses
Document type: master thesis
Rights: © 2021 Killian Dally
Files: MSc_Thesis_Killian_Dally.pdf (PDF, 11.49 MB)
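The abstract names Soft Actor-Critic (SAC) as the underlying algorithm. As context, a minimal sketch of the soft Bellman target that SAC critics regress toward is shown below; all function names, variable names, and numeric values are illustrative assumptions, not taken from the thesis.

```python
def soft_bellman_target(reward, gamma, q1_next, q2_next, log_prob_next, alpha):
    """Soft Bellman backup used by SAC critics:
        y = r + gamma * (min(Q1', Q2') - alpha * log pi(a'|s'))
    The min over twin critics curbs Q-value overestimation; the entropy
    term -alpha * log pi rewards stochastic, exploratory policies."""
    soft_value = min(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * soft_value

# Illustrative numbers only (hypothetical, not from the thesis):
y = soft_bellman_target(reward=1.0, gamma=0.99, q1_next=5.0,
                        q2_next=4.8, log_prob_next=-1.2, alpha=0.2)
print(round(y, 4))  # → 5.9896
```

The entropy coefficient `alpha` trades off reward maximization against policy entropy, which is what makes SAC sample-efficient enough to be trained fully offline, as the abstract emphasizes.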