Browsing by Subject "Image Classification"
Item Adversarial Attacks for Network Interpretation (2018-01-01) Pillai, Vipin Radhakrishnan; Pirsiavash, Hamed; Computer Science and Electrical Engineering; Computer Science
Adversarial attacks are known to fool deep neural networks into producing incorrect predictions. We introduce adversarial attack algorithms that not only fool the network's prediction, but also fool our interpretation of the cause of the network's decision. We show that our algorithms can empower practical adversarial attacks, like adversarial patches, by hiding them from network interpretation tools. We also introduce adversarial attack algorithms that can change the interpretation of the network's decision without changing the network's output. We show that our attack tuned for Grad-CAM visualization transfers directly to other visualization algorithms, such as CAM and occluding patches. We believe our algorithms can facilitate developing more robust network interpretation tools that truly explain the network's underlying decision-making process. (A minimal sketch of this style of attack appears after the next entry.)

Item Applying Backtracking in Hierarchical Classification to Recover from Error Propagation (2022-01-01) Leatherwood, Kathleen; Dutt, Abhijit; Computer Science and Electrical Engineering; Computer Science
Hierarchical classification has previously been demonstrated to yield more accurate results than flat classifiers; however, the tree-like structure introduces the problem of error propagation. Error propagation happens when a parent classifier within the hierarchy misclassifies an object. In traditional hierarchical structures, the system cannot recover from such misclassifications. In this paper, we propose a method of backtracking that allows the system to recover from such errors. For each non-root classifier in the structure, we add a new label option, "incorrect". In the case that an object is classified as "incorrect", we perform backtracking up the hierarchy and reclassify the object to the next best class from the previous level of the hierarchy (see the backtracking sketch below). Through experimentation, we have found that this method can help the system recover from individual instances of error propagation; however, more work is necessary to find an appropriate weight to give the "incorrect" class that does not cause other errors, decreasing the overall accuracy.
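As a rough illustration of the first entry above ("Adversarial Attacks for Network Interpretation"), the sketch below perturbs an image so that the predicted class is preserved while the Grad-CAM heatmap is steered toward an attacker-chosen region. This is a minimal reconstruction, not the authors' code: the ResNet-18 model, the choice of layer4 as the Grad-CAM layer, the loss weighting, and the perturbation budget are all illustrative assumptions.

```python
# Sketch: change the Grad-CAM interpretation without changing the prediction.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()     # assumed target network
for p in model.parameters():
    p.requires_grad_(False)                      # only the perturbation is trained
target_layer = model.layer4                      # assumed Grad-CAM layer

feats = {}
target_layer.register_forward_hook(lambda m, i, o: feats.update(act=o))

def gradcam(x, cls):
    """Differentiable Grad-CAM heatmap for class `cls`."""
    score = model(x)[0, cls]
    act = feats["act"]                                        # [1, C, 7, 7]
    grads = torch.autograd.grad(score, act, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)            # pooled gradients
    cam = F.relu((weights * act).sum(dim=1))                  # [1, 7, 7]
    return cam / (cam.max() + 1e-8)

x = torch.rand(1, 3, 224, 224)                   # stand-in image
cls = model(x).argmax(1).item()                  # original prediction
target_cam = torch.zeros(1, 7, 7)
target_cam[0, :3, :3] = 1.0                      # attacker-chosen corner region

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(100):
    x_adv = (x + delta).clamp(0, 1)
    # keep the original class (cross-entropy) while moving the heatmap (MSE)
    loss = F.cross_entropy(model(x_adv), torch.tensor([cls])) \
         + F.mse_loss(gradcam(x_adv, cls), target_cam)
    opt.zero_grad(); loss.backward(); opt.step()
    delta.data.clamp_(-8 / 255, 8 / 255)         # small L-inf budget
```

Note the double backward: the heatmap itself is built from gradients (`create_graph=True`), so optimizing it requires differentiating through a gradient computation.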
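The backtracking scheme in the second entry ("Applying Backtracking in Hierarchical Classification...") can be summarized in a few lines. The sketch below uses assumed interfaces, not the thesis implementation: each internal node ranks candidate child labels and may answer "incorrect", which bounces the object back to its parent to try the next-best class.

```python
# Sketch: hierarchical classification with backtracking on "incorrect".
INCORRECT = "incorrect"

class Node:
    def __init__(self, name, children=None, predict=None):
        self.name = name
        self.children = children or {}   # child label -> Node
        self.predict = predict           # fn(x) -> labels ranked best-first

def classify(node, x):
    if not node.children:
        return node.name                 # leaf: a final class
    for label in node.predict(x):
        if label == INCORRECT:
            break                        # this node rejects the object
        result = classify(node.children[label], x)
        if result != INCORRECT:
            return result                # committed classification
        # child said "incorrect": backtrack and try the next-best label here
    return INCORRECT                     # propagate the rejection to the parent

# Toy hierarchy with hard-coded rankings standing in for real classifiers;
# only non-root nodes get the "incorrect" option, as in the entry above.
cat, dog, car = Node("cat"), Node("dog"), Node("car")
animals = Node("animals", {"cat": cat, "dog": dog},
               predict=lambda x: [INCORRECT, "cat", "dog"])
root = Node("root", {"animals": animals, "vehicles": car},
            predict=lambda x: ["animals", "vehicles"])
print(classify(root, None))  # "animals" rejects, so backtrack -> "car"
```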
Item Enhanced Lung Nodule Malignancy Suspicion Classifier using Biomarkers, Radiomics and Image Features (2020-01-20) Mehta, Kushal Samir; Chapman, David; Computer Science and Electrical Engineering; Computer Science
Lung cancer is the leading cause of all cancer-related deaths. Early detection of lung cancer by inspecting computed tomography (CT) scans tends to improve survival rates significantly without the need for invasive procedures. This work aims to strengthen standard image-based deep learning lung nodule malignancy classification models by combining user-defined features, such as biomarkers, nodule size, shape, and radiomic features, with deep features to obtain a meaningful representation of lung nodules. The National Cancer Institute (NCI) Lung Image Database Consortium (LIDC-IDRI) dataset is used to train and evaluate the classification task. We combine 3D deep image features of the nodules with biomarkers, nodule diameter, volume, and radiomic features, and train an ensemble model that classifies nodules as malignant or benign. As a result, we aim to reduce false-positive rates among patients and compare our results with current state-of-the-art lung nodule malignancy classification models. This work employs a 3D Convolutional Neural Network as well as a Random Forest to combine the nodule features (a fusion sketch appears after the final entry). Our results show that a combination of deep features and user-defined features outperforms image classification with CNNs alone in predicting suspicion levels of lung nodule malignancy on LIDC-IDRI.

Item Universal Adversarial Patches (2017-01-01) Patil, Koninika; Pirsiavash, Hamed; Computer Science and Electrical Engineering; Computer Science
Deep learning algorithms have gained a lot of popularity in recent years due to their state-of-the-art results in computer vision applications. Despite their success, studies have shown that neural networks are vulnerable to attacks via perturbations of input images in various forms, called adversarial examples. Adversarial examples pose a severe security threat because they expose a flaw in machine learning systems. In this thesis, we propose a method to generate image-agnostic universal adversarial patches for attacking image classification and object detection using latent contextual information. Our experiments show that, for classification, replacing a small part of an image with a universal adversarial patch can cause misclassification of more than 40% of images. In object detection, we attack each category of objects individually, and the best patch causes approximately 20% of images to be misclassified when attacking images of the bird category. We also demonstrate that photos of adversarial examples containing the adversarial patch, taken on a cell phone, can also fool the network. Thus, we show that adversarial examples exist in the physical world and can cause harm to AI-based systems.
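For the lung nodule entry above, the fusion step (deep features concatenated with user-defined features, then fed to a Random Forest) is simple to sketch. The feature dimensions, random stand-in data, and hyperparameters below are assumptions for illustration, not values from the thesis.

```python
# Sketch: feature-level fusion of deep and handcrafted features + Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

n = 200
deep_feats = np.random.rand(n, 128)        # stand-in for 3D-CNN nodule embeddings
handcrafted = np.random.rand(n, 10)        # stand-in biomarkers/diameter/radiomics
y = np.random.randint(0, 2, n)             # 1 = malignant, 0 = benign

X = np.hstack([deep_feats, handcrafted])   # simple concatenation fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", rf.score(X_te, y_te))
```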
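Finally, the "Universal Adversarial Patches" entry follows the usual optimization pattern for universal patches: a single patch, pasted at random positions in every image, is trained to push the classifier toward one target class. The model, patch size, and training loop below are illustrative assumptions rather than the thesis setup.

```python
# Sketch: optimizing one image-agnostic adversarial patch.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # assumed victim classifier
target_class = 0                               # attacker-chosen class id
patch = torch.rand(3, 50, 50, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(imgs, patch):
    """Paste the patch at a random location in each image."""
    out = imgs.clone()
    _, ph, pw = patch.shape
    for i in range(imgs.size(0)):
        y = torch.randint(0, imgs.size(2) - ph + 1, (1,)).item()
        x = torch.randint(0, imgs.size(3) - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return out

for step in range(100):
    imgs = torch.rand(8, 3, 224, 224)          # stand-in training batch
    logits = model(apply_patch(imgs, patch))
    target = torch.full((imgs.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)     # same patch must fool every image
    opt.zero_grad(); loss.backward(); opt.step()
```

Randomizing the paste location at every step is what makes the patch image-agnostic rather than tied to one position or one input.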