
    Towards Hiding Adversarial Examples from Network Interpretation

    Files
    1812.02843.pdf (7.393 MB)
    Links to Files
    https://arxiv.org/abs/1812.02843
    Permanent Link
    http://hdl.handle.net/11603/14342
    Collections
    • UMBC Computer Science and Electrical Engineering Department
    • UMBC Faculty Collection
    Metadata
    Author/Creator
    Subramanya, Akshayvarun
    Pillai, Vipin
    Pirsiavash, Hamed
    Date
    2018-12-06
    Type of Work
    10 pages
    Text
    conference papers and proceedings
    preprints
    Citation of Original Publication
    Akshayvarun Subramanya, Vipin Pillai, Hamed Pirsiavash, Towards Hiding Adversarial Examples from Network Interpretation, Computer Vision and Pattern Recognition, 2018, https://arxiv.org/abs/1812.02843
    Rights
    This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is covered by a Creative Commons license, contact the copyright holder or the author for uses protected by Copyright Law.
    Subjects
    adversarial attack algorithms
    deep networks
    network interpretation
    Abstract
    Deep networks have been shown to be fooled rather easily using adversarial attack algorithms. Practical methods such as adversarial patches have been shown to be extremely effective in causing misclassification. However, these patches can be highlighted using standard network interpretation algorithms, thus revealing the identity of the adversary. We show that it is possible to create adversarial patches that not only fool the prediction, but also change what interpretation algorithms identify as the cause of the prediction. We show that our algorithms can empower adversarial patches by hiding them from network interpretation tools. We believe our algorithms can facilitate developing more robust network interpretation tools that truly explain the network’s underlying decision-making process.
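    The abstract describes a combined objective: raise an attacker-chosen class score while suppressing the interpretation (saliency) energy inside the patch region. The toy sketch below illustrates that idea only — it is not the paper's implementation. The two-layer ReLU "network", the 2x2 patch, the numerical gradient saliency, and the weight `lam` are all illustrative assumptions.

    ```python
    import numpy as np

    # Toy sketch of the combined objective from the abstract (assumed names,
    # NOT the paper's method): optimize a small patch so that (a) the score of
    # a target class rises while (b) a simple gradient-based saliency map
    # assigns little energy to the patch region.
    rng = np.random.default_rng(0)
    D, H, C = 64, 32, 10                     # 8x8 input, hidden units, classes
    W1 = rng.normal(scale=0.3, size=(H, D))  # toy two-layer ReLU "network"
    W2 = rng.normal(scale=0.3, size=(C, H))
    x_clean = rng.normal(size=D)
    patch_idx = np.array([0, 1, 8, 9])       # top-left 2x2 patch, flattened
    target = 3                               # attacker-chosen class

    def scores(x):
        return W2 @ np.maximum(W1 @ x, 0.0)

    def saliency(x, cls, eps=1e-4):
        # numerical |d score_cls / d x_i| -- the simplest interpretation map
        g = np.zeros(D)
        for i in range(D):
            e = np.zeros(D)
            e[i] = eps
            g[i] = (scores(x + e)[cls] - scores(x - e)[cls]) / (2 * eps)
        return np.abs(g)

    def combined_loss(x, lam=0.1):
        # term 1: push the target score up; term 2: hide patch from the map
        return -scores(x)[target] + lam * saliency(x, target)[patch_idx].sum()

    # greedy coordinate descent over the patch pixels only, clipped to [-3, 3]
    x_adv = x_clean.copy()
    best = combined_loss(x_adv)
    for _ in range(50):
        for i in patch_idx:
            for step in (0.2, -0.2):
                cand = x_adv.copy()
                cand[i] = np.clip(cand[i] + step, -3.0, 3.0)
                loss = combined_loss(cand)
                if loss < best:
                    best, x_adv = loss, cand
    ```

    Only the patch pixels are ever modified, mirroring the patch-attack threat model; the paper's actual attack operates on real networks with interpretation tools such as Grad-CAM rather than this toy saliency.
    
    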


    Albin O. Kuhn Library & Gallery
    University of Maryland, Baltimore County
    1000 Hilltop Circle
    Baltimore, MD 21250
    www.umbc.edu/scholarworks

    Contact information:
    Email: scholarworks-group@umbc.edu
    Phone: 410-455-3544


    If you wish to submit a copyright complaint or withdrawal request, please email mdsoar-help@umd.edu.

     

     


