Title: An Empirical Look at Gradient-based Black-box Adversarial Attacks on Deep Neural Networks Using One-point Residual Estimates
Author: Jansen, Joost (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Roos, S. (mentor); Huang, J. (mentor); Hong, C. (mentor); Lan, G. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-18

Abstract: In recent years, there has been a great deal of research on optimising the generation of adversarial examples for Deep Neural Networks (DNNs) in a black-box setting. Gradient-based techniques for obtaining adversarial images with a minimal number of input-output queries to the attacked model have been studied extensively. However, existing studies have not coherently discussed the effect of different gradient estimation techniques. In this paper, a new one-point residual estimate is compared to the known two-point estimates. The findings show that the one-point residual estimate is not a viable option for decreasing the number of queries to the attacked model. With a one-point residual estimate, the accuracy of the attacks remains the same for weaker models; for stronger models, there is a slight decrease in accuracy at identical distortion levels. All estimates are tested on different PGD attacks on the MNIST and F-MNIST datasets using a 3-layer convolutional network.

To reference this document use: http://resolver.tudelft.nl/uuid:ecd36e2c-3a39-4561-98f3-b7a453e733c6
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Joost Jansen
Files: Final_paper.pdf (PDF, 1.34 MB)
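For context, below is a minimal NumPy sketch of the two estimator families the abstract contrasts: the classic two-point estimate, which spends two fresh model queries per gradient estimate, and a one-point residual estimate, which reuses the previous query's value so each subsequent estimate costs only one new query. The function and class names, the step size delta, and the warm-up handling are illustrative assumptions, not the thesis's exact implementation.

import numpy as np

def two_point_estimate(f, x, delta=1e-2, rng=None):
    """Two-point gradient estimate: two fresh queries to f per call."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

class OnePointResidualEstimator:
    """One-point residual estimate (sketch): stores the previous query's
    value and forms the residual f(x_t + d*u_t) - f(x_{t-1} + d*u_{t-1}),
    so each call after the first costs a single new query to f."""

    def __init__(self, delta=1e-2, seed=0):
        self.delta = delta
        self.rng = np.random.default_rng(seed)
        self.prev = None  # f evaluated at the previous perturbed point

    def estimate(self, f, x):
        u = self.rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        value = f(x + self.delta * u)      # the single new query
        if self.prev is None:
            grad = np.zeros_like(x)        # warm-up: no residual yet
        else:
            grad = (value - self.prev) / self.delta * u
        self.prev = value
        return grad

# Toy check on a quadratic loss, standing in for the attacked model's loss.
if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))
    x = np.ones(4)
    est = OnePointResidualEstimator(delta=1e-2)
    for _ in range(3):
        g = est.estimate(f, x)  # one query per iteration after warm-up
    print("one-point residual:", g)
    print("two-point:", two_point_estimate(f, x))

The query-budget trade-off is visible in the sketch: the residual estimator halves the per-step query cost, but its estimate mixes function values taken at two different random directions, which is one plausible source of the accuracy loss the abstract reports on stronger models.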