Towards Trust in Human-AI Teams

Author: Lindhorst, Paul (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Ferreira Gomes Centeio Jorge, C. (mentor); Tielman, M.L. (mentor); Tömen, N. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-23

Abstract: Human-AI teams require trust to operate efficiently and to solve tasks such as search and rescue. Trustworthiness is measured using the ABI model: Ability, Benevolence, and Integrity. This research paper examines the effect a conflicting robot has on human trustworthiness. The hypothesis we test is: "human trustworthiness will decrease when paired with a conflicting AI". We conduct an experiment with a control group playing with a normal agent and an experimental group paired with the conflicting agent. Using the ABI concepts, we model human trustworthiness in both groups using in-game observations (objective) and a questionnaire (subjective). Comparing the results of the two groups, we find that the conflicting agent does not decrease objective trustworthiness; the questionnaires, however, show that subjective human benevolence and integrity are negatively affected when paired with the conflicting agent.

Subjects: AI; collaboration; human; trust; trustworthiness; agent; search and rescue; conflicting; human-AI collaboration
To reference this document use: http://resolver.tudelft.nl/uuid:9f045f00-ab28-4e44-ae16-b6f4ee8e63aa
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Paul Lindhorst
Files: Lindhorst_P_2022_Towards_Trust.pdf (PDF, 1.18 MB)