Preventing Poisoning Attacks On AI Based Threat Intelligence Systems
Date
2019-12-05
Citation of Original Publication
N. Khurana, S. Mittal, A. Piplai and A. Joshi, "Preventing Poisoning Attacks On AI Based Threat Intelligence Systems," 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), Pittsburgh, PA, USA, 2019, pp. 1-6, doi: 10.1109/MLSP.2019.8918803.
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless it is covered by a Creative Commons license, contact the copyright holder or the author for uses protected by copyright law.
Abstract
As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent, and use this information. The credibility of information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources, and in certain situations can pose a threat to human life. In this paper, we use an ensemble semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score, ensuring the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts use such systems to identify possible threats by analyzing data scattered across social media websites, forums, blogs, etc.