Title: Fictional Co-Play for Human-Agent Collaboration: Evaluating state-of-the-art reinforcement learning technique for adaptability to human collaborators
Author: Ordonez Cardenas, Nathan (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Oliehoek, F.A. (mentor); Loftin, R.T. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-27

Abstract: A longstanding problem in reinforcement learning is human-agent collaboration. Past research indicates that RL agents undergo a distributional shift when they begin collaborating with human beings, so the goal is to create agents that can adapt. Building on prior work that uses the two-player Overcooked environment, we reproduce a simplified version of the Fictitious Co-Play algorithm in order to confirm previously reported improvements at a smaller scale of training, using Self-Play and Population-Based Training agents as the baselines for comparison. We find that our agent on average slightly outperforms both baseline algorithms when evaluated with a human proxy. We also find high cross-seed variance in performance, indicating the potential for further hyperparameter tuning.

Subjects: Reinforcement Learning; Ad-hoc teamwork; Human-AI collaboration; Human-Agent Teamwork; Overcooked AI
To reference this document use: http://resolver.tudelft.nl/uuid:ca39cc40-049a-42ce-ba6f-003e5c358351
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Nathan Ordonez Cardenas
Files: research_paper_nathan.pdf (PDF, 491.39 KB)