Backdoor Attack on gaze estimation

Title: Backdoor Attack on gaze estimation
Author: Reda, Yuji (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Du, L. (mentor); Lan, G. (graduation committee); Zhang, X. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2023-06-25

Abstract: BadNets are a type of backdoor attack that manipulates the behavior of convolutional neural networks (CNNs): the training data is modified so that, whenever certain triggers appear in an input, the CNN behaves as the attacker intends. In this paper, we apply this type of backdoor attack to a regression task, gaze estimation. We examine different triggers to discover which of them lead to better attack performance, and thus infer which trigger properties an attacker can exploit most. It turns out that placing frames around the images and drawing multiple lines across them are the most effective triggers for training BadNets.

Subject: Gaze estimation; Convolutional Neural Network; Backdoor Attacks; BadNets
To reference this document use: http://resolver.tudelft.nl/uuid:80cfdc30-f335-41a8-8665-83f92265edc0
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2023 Yuji Reda
Files: CSE3000_report.pdf (PDF, 5.33 MB)
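The abstract describes a BadNets-style data-poisoning attack adapted to regression: a visible trigger (here, a frame around the image) is stamped onto a fraction of training samples, and their gaze labels are overwritten with an attacker-chosen target. The following is a minimal illustrative sketch of that idea, not code from the report: it assumes grayscale eye images normalized to [0, 1] with 2-D (yaw, pitch) gaze labels, and the function names, frame trigger, and poisoning rate are all hypothetical choices.

```python
import numpy as np

def add_frame_trigger(image: np.ndarray, width: int = 2, value: float = 1.0) -> np.ndarray:
    """Stamp a bright frame of `width` pixels around the image border.
    `value=1.0` assumes pixel intensities are normalized to [0, 1]."""
    poisoned = image.copy()
    poisoned[:width, :] = value   # top rows
    poisoned[-width:, :] = value  # bottom rows
    poisoned[:, :width] = value   # left columns
    poisoned[:, -width:] = value  # right columns
    return poisoned

def poison_dataset(images: np.ndarray, gazes: np.ndarray, rate: float = 0.1,
                   target_gaze=(0.0, 0.0), rng=None):
    """Poison a fraction `rate` of samples: add the trigger to each chosen
    image and replace its (yaw, pitch) label with the attacker's target.
    Returns poisoned copies plus the indices that were modified."""
    rng = rng or np.random.default_rng(0)
    images, gazes = images.copy(), gazes.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_frame_trigger(images[i])
        gazes[i] = target_gaze
    return images, gazes, idx
```

A model trained on the poisoned set behaves normally on clean inputs but regresses toward `target_gaze` whenever the frame trigger is present; the thesis compares triggers such as frames and multiple lines by how reliably they induce this behavior.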