Title: Learning, Transfer and Use of Affordances in Robotics Tasks
Author: Ciftci, O.A.
Contributor: Babuska, R. (mentor); Lopes, D.G. (mentor); Wang, C. (mentor)
Faculty: Mechanical, Maritime and Materials Engineering
Department: Delft Center for Systems and Control
Programme: Systems and Control
Date: 2014-02-19

Abstract: The use of robots in various applications is increasing with technological progress. Recent examples include the Roomba vacuum-cleaning robot for homes, industrial robots on production lines, and unmanned robots for hazardous environments, such as the NASA Mars rover. One important aspect of robotics is the intelligence mechanism. However, designing controllers and decision makers is not always easy, since both the mechanical designs of robots and the tasks they perform are becoming more complex. Furthermore, robots are expected to cope with unforeseen interactions in these environments. Therefore, an approach other than hard-coding the robotic agents beforehand should be used, such as learning. The learning approach has its own difficulties, such as long learning sessions and the inability to transfer knowledge to new tasks efficiently, which forces the agent to perform another learning session. The affordance concept offers a solution to these difficulties, since it maintains a form of transition model of the environment and the robot. In combination with action planning, affordances make it possible to reuse knowledge gained in previous tasks, environments or robots. This reduces or completely eliminates the learning time required for the new task, environment or robot. This methodology also enables agents to successfully perform tasks, without additional learning, in environments that have known properties but are new to the agent, which is not always the case with transfer learning for Reinforcement Learning (RL) [1].
Here, "successfully" means that the agent completes the task in an optimal manner; in a navigation task, for example, this means reaching the destination in the shortest time or the lowest number of steps. Knowledge transfer between robotic agents should ideally decrease the learning time and the number of trials a new agent requires in an environment. This can be achieved by passing some of the learned knowledge from agents that are already functioning in that same environment. An agent that has not learned anything about the environment will not know even its basic properties, such as that a wall is not pushable or that a docking station affords charging. A common representation saves time for all agents that can benefit from such information, and the results presented in this thesis confirm this.

Subject: robot; affordance learning; affordance transfer; use of affordance
To reference this document use: http://resolver.tudelft.nl/uuid:989f04ed-d5ee-4e7f-8a65-a1795291ac1c
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2014 Ciftci, O.A.
Files: Osman-Master-Thesis-V2.pdf (1.89 MB)
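As a minimal illustration of the affordance idea described in the abstract, the sketch below represents affordances as a table mapping (object, action) pairs to effects, and uses that table during action planning (breadth-first search over a grid). The table encodes exactly the kind of basic properties mentioned above (a wall does not afford traversal, a dock affords charging), and because it refers to object types rather than a specific map, the same table can be handed to a new agent in a new environment without relearning. All names and the grid layout are illustrative assumptions, not taken from the thesis.

```python
from collections import deque

# Hypothetical affordance table: (object type, action) -> effect.
# ("wall", "move") is deliberately absent: walls do not afford traversal.
AFFORDANCES = {
    ("floor", "move"): "traverse",  # floor affords moving onto it
    ("dock", "move"): "traverse",   # the dock can be entered...
    ("dock", "plug"): "charge",     # ...and affords charging
}

def plan_path(grid, start, goal):
    """Breadth-first search over grid cells, expanding only moves that the
    affordance table allows. Returns the shortest list of cells from start
    to goal (inclusive), or None if the goal is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and (nr, nc) not in seen
                    and (grid[nr][nc], "move") in AFFORDANCES):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

# An environment new to the agent, but built from known object types,
# so the shared affordance table suffices and no extra learning is needed.
grid = [
    ["floor", "wall",  "dock"],
    ["floor", "wall",  "floor"],
    ["floor", "floor", "floor"],
]
path = plan_path(grid, start=(0, 0), goal=(0, 2))
```

Because BFS expands cells in order of distance, the returned path is optimal in the number of steps, matching the abstract's notion of completing a navigation task successfully.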