Title: A Hierarchical Maze Navigation Algorithm with Reinforcement Learning and Mapping
Author: Mannucci, T. (TU Delft Control & Simulation); van Kampen, E. (TU Delft Control & Simulation)
Contributor: Jin, Y. (editor); Kollias, S. (editor)
Date: 2016
Abstract: Goal-finding in an unknown maze is a challenging problem for a Reinforcement Learning agent, because the corresponding state space can be large, if not intractable, and the agent does not usually have a model of the environment. Hierarchical Reinforcement Learning has been shown in the past to improve the tractability and learning time of complex problems, as well as to facilitate learning a coherent transition model for the environment. Nonetheless, considerable time is still needed to learn the transition model, so that initially the agent can perform poorly, getting trapped in dead ends and colliding with obstacles. This paper proposes a strategy for maze exploration that, by means of sequential tasking and off-line training on an abstract environment, provides the agent with a minimal level of performance from the very beginning of exploration. In particular, this approach allows the agent to prevent collisions with obstacles, thus enforcing a safety restraint on the agent.
To reference this document use: http://resolver.tudelft.nl/uuid:3d32d5d3-4c46-4a91-ba33-9a2747387987
DOI: https://doi.org/10.1109/SSCI.2016.7849365
Publisher: IEEE
Embargo date: 2018-01-01
Source: 2016 IEEE Symposium Series on Computational Intelligence: Athens, Greece
Event: 2016 IEEE Symposium Series on Computational Intelligence, 2016-10-06 → 2016-10-09, Athens, Greece
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2016 T. Mannucci, E. van Kampen
Files: Mannucci_A_Hierarchical_M ... evised.pdf (686.93 KB)
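To give a flavor of the problem setting the abstract describes, the following is a minimal illustrative sketch, not the paper's algorithm: a tabular Q-learning agent in a toy grid maze, with a simple "safety filter" that masks actions leading into walls, so collisions are prevented from the very start of exploration. The maze layout, reward values, and all function names here are assumptions made for this sketch.

```python
import random

# Toy maze: '#' = wall, '.' = free cell, 'G' = goal (illustrative layout).
MAZE = [
    "#####",
    "#..G#",
    "#.#.#",
    "#...#",
    "#####",
]
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def safe_actions(state):
    """Safety restraint: only allow moves that do not enter a wall cell."""
    r, c = state
    return [a for a, (dr, dc) in ACTIONS.items() if MAZE[r + dr][c + dc] != "#"]

def step(state, action):
    """Apply a (pre-filtered) action; small step penalty, reward at goal."""
    dr, dc = ACTIONS[action]
    nxt = (state[0] + dr, state[1] + dc)
    done = MAZE[nxt[0]][nxt[1]] == "G"
    return nxt, (1.0 if done else -0.01), done

def train(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular epsilon-greedy Q-learning over collision-free actions only."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state, done = (3, 1), False
        while not done:
            acts = safe_actions(state)  # collisions are impossible by design
            if rng.random() < eps:
                action = rng.choice(acts)
            else:
                action = max(acts, key=lambda a: Q.get((state, a), 0.0))
            nxt, reward, done = step(state, action)
            best_next = max(
                (Q.get((nxt, a), 0.0) for a in safe_actions(nxt)), default=0.0
            )
            Q[(state, action)] = Q.get((state, action), 0.0) + alpha * (
                reward + gamma * best_next - Q.get((state, action), 0.0)
            )
            state = nxt
    return Q

Q = train()
```

Note that the action mask gives the agent a floor on performance before any learning has happened, which is loosely analogous in spirit to the paper's safety restraint; the paper's actual contribution (sequential tasking and off-line training on an abstract environment) is not reproduced here.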