Universal Adversarial Patches

dc.contributor.advisor: Pirsiavash, Hamed
dc.contributor.author: Patil, Koninika
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Computer Science
dc.date.accessioned: 2019-10-11T13:43:12Z
dc.date.available: 2019-10-11T13:43:12Z
dc.date.issued: 2017-01-01
dc.description.abstract: Deep learning algorithms have gained considerable popularity in recent years due to their state-of-the-art results in computer vision applications. Despite this success, studies have shown that neural networks are vulnerable to attacks via perturbations of input images in various forms, called adversarial examples. Adversarial examples pose a severe security threat because they expose a flaw in machine learning systems. In this thesis, we propose a method to generate image-agnostic universal adversarial patches for attacking image classification and object detection using latent contextual information. Our experiments show that for classification, replacing a small part of an image with a universal adversarial patch can cause misclassification of more than 40% of images. In object detection, we attack each object category individually; the best patch causes approximately 20% of images to be misclassified when attacking the bird category. We also demonstrate that photos of adversarial examples containing the patch, taken with a cell phone, can also fool the network. Thus, we show that adversarial examples exist in the physical world and can cause harm to AI-based systems. (An illustrative sketch of this patch-optimization idea follows the metadata record below.)
dc.genre: theses
dc.identifier: doi:10.13016/m2dlkk-36q4
dc.identifier.other: 11783
dc.identifier.uri: http://hdl.handle.net/11603/15522
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
dc.source: Original File Name: Patil_umbc_0434M_11783.pdf
dc.subject: Adversarial Examples
dc.subject: Convolutional Neural Networks
dc.subject: Image Classification
dc.subject: Object Detection
dc.title: Universal Adversarial Patches
dc.type: Text
dcterms.accessRights: Distribution Rights granted to UMBC by the author.
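
The abstract describes learning a single, image-agnostic patch by gradient-based optimization against a fixed network. The sketch below is a minimal illustration of that general idea, not the thesis's exact method: it trains an untargeted universal patch against a frozen, pretrained ImageNet classifier by gradient ascent on the classification loss. The model choice, patch size, fixed placement, [0, 1] pixel range, and the data_loader variable are all assumptions made for illustration.

import torch
import torch.nn.functional as F
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Frozen pretrained classifier (assumption: any ImageNet classifier would do).
model = models.resnet50(pretrained=True).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

patch_size = 50  # patch side length in pixels (assumed value)
patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch, top=10, left=10):
    # Paste the patch at a fixed location on every image in the batch.
    # Assumes images are in [0, 1]; input normalization is omitted for brevity.
    patched = images.clone()
    patched[:, :, top:top + patch_size, left:left + patch_size] = patch
    return patched

for epoch in range(5):  # a few passes over the data, for illustration
    for images, labels in data_loader:  # `data_loader` is a placeholder
        images, labels = images.to(device), labels.to(device)
        logits = model(apply_patch(images, patch))
        # Untargeted attack: maximize the loss on the true labels,
        # i.e. minimize its negative.
        loss = -F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0, 1)  # keep the patch a valid image

In practice, randomizing the patch location and applying small transformations during training tends to make such patches more robust, which is what allows printed patches photographed with a cell phone to remain adversarial.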

Files

Original bundle
Name: Patil_umbc_0434M_11783.pdf
Size: 11.39 MB
Format: Adobe Portable Document Format
License bundle
Name: PatilK_Universal_Open.pdf
Size: 55.92 KB
Format: Adobe Portable Document Format