Using Sign-Language as an Input Modality for Microtask Crowdsourcing
Author: Singh, Aayush (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Houben, G.J.P.M. (mentor); Gadiraju, Ujwal (mentor); Broz, F. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science
Date: 2022-08-26

Abstract
Several input modalities have been developed across technological landscapes such as crowdsourcing and conversational agents. Sign language, however, remains an input type that has received little attention. Although many people around the world use sign language as their primary language, few efforts have been made to include them in these technologies. In this thesis, we hope to draw attention to, and take a step towards, the inclusion of deaf and mute people in microtask crowdsourcing. We identify some of the existing technical and research gaps in current architectures for real-time Sign Language Recognition/Translation. Next, we determine which microtasks can be adapted to use sign language as input, keeping in mind the challenges it introduces. We then investigate the effectiveness of a system that uses sign language as input by building a web application, SignUpCrowd, for microtask crowdsourcing (specifically, Visual Question Answering and Tweet Sentiment Analysis tasks) and comparing it with prevalent input types such as text and click. This comparison with popular input types helps quantify how sign-language input differs from them, and also reveals which input types workers prefer for the particular microtasks.
For this, we developed three web applications with different input types and conducted a between-subjects experimental study on Prolific in which workers (N=240) were asked to perform the above-mentioned tasks using sign-language, text, and click input. Our results indicate that, in terms of task completion time and task accuracy, sign language as an input modality in microtask crowdsourcing is not significantly different from other, commonly used, input types. We also observed that, for the given microtasks, workers' preference for sign-language input was higher than for text input. Although people with no knowledge of sign language found it difficult, this input modality is aimed at a different target audience. This shows that there is scope for sign language as an input type for microtask crowdsourcing, and paves the way for further efforts to introduce sign language into real-world applications.

Subjects: Crowdsourcing; Sign Language; Input Modality
To reference this document use: http://resolver.tudelft.nl/uuid:d0d6e76e-fb84-479a-b3a4-569b7b5277f9
Bibliographical note: https://osf.io/n8pca/?view_only=fc7bf6ab55d6482f83ff2729c25b937f
Part of collection: Student theses
Document type: master thesis
Rights: © 2022 Aayush Singh
Files: TUD_Thesis_Report_AayushS.pdf (PDF, 7.47 MB)