Title: Human trustworthiness when collaborating with a friendly agent: Final paper
Author: Rademaker, Justin (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Tielman, M.L. (mentor); Ferreira Gomes Centeio Jorge, C. (mentor); Tömen, N. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-23

Abstract: As technology advances, automated systems become more autonomous, which leads to greater interdependence between machines and humans. Much research has been done on trust between humans and on humans' trust in machines. An interesting question that remains is how the behavior of an agent influences human trustworthiness in a human-agent collaborative setting. The research presented in this paper contributes to the understanding of this area. It investigates a specific behavioral trait using the following hypothesis: friendly behavior of an agent improves human trustworthiness. Here, trustworthiness is broken into the constructs ability, benevolence, and integrity. An experiment was conducted using a collaborative Search and Rescue game, in which the following participant behaviors were measured:
- Ability: speed and effectiveness;
- Benevolence: communication, willingness to help, agreeableness to advice, responsiveness;
- Integrity: truthfulness.
Furthermore, a Likert scale was used to measure the participants' own perception of their trustworthiness.
The experiment was conducted with 20 participants in the control group, where the agent spoke in a neutral manner, and 20 in the experimental group, where the agent instilled empathy, stimulated collaboration, encouraged the participants, and was affectionate. The research showed a significant improvement in the experimental group only for communication and willingness to help. This gives some indication that a friendly agent only slightly improves the trustworthiness of a human. However, the research has some limitations that might also explain the lack of significant results. Firstly, it is unclear to what extent the measures truly captured the constructs of trustworthiness. Secondly, to create a friendly agent, theories from organizational and social psychology were used, which are mostly focused on human-human relationships rather than human-agent relationships. Finally, some confounding variables may have had an impact, such as lag in the game and participants not properly reading the agent's messages.

Subject: Trustworthiness; Artificial Intelligence; Collaboration
To reference this document use: http://resolver.tudelft.nl/uuid:0fa32480-1ced-4631-9ed6-547c75fd5360
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Justin Rademaker
Files: Final_paper_Justin_Rademaker.pdf (PDF, 657.86 KB)