Browsing by Subject "learning (artificial intelligence)"
Now showing 1 - 4 of 4
Item
AI based approach to identify compromised meters in data integrity attacks on smart grid
(IET, 2017-10-02) Khanna, Kush; Panigrahi, Bijaya Ketan; Joshi, Anupam
False data injection attacks can pose serious threats to the operation and control of the power grid. The smarter the power grid becomes, the more vulnerable it is to cyber attacks. Various methods for detecting cyber attacks have been proposed in the recent literature. However, to fully mitigate cyber threats, the compromised meters must be identified and secured. In this paper, we present an Artificial Intelligence (AI) based identification method that correctly singles out the malicious meters. The proposed method identifies the compromised meters by predicting the correct measurements in the event of a cyber attack. NYISO load data is mapped onto the IEEE 14-bus system to validate the proposed method. The performance of the method is compared for Artificial Neural Network (ANN) and Extreme Learning Machine (ELM) based AI techniques; both identify the corrupted meters with high accuracy.

Item
Boosting Self-Supervised Learning via Knowledge Transfer
(IEEE, 2018-12-17) Noroozi, Mehdi; Vinjimoor, Ananth; Favaro, Paolo; Pirsiavash, Hamed
In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to using the same model, or parts thereof, for both the pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model.
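This decoupling can be illustrated with a toy sketch: a frozen pretext model produces features, those features are clustered into pseudo-labels, and any other architecture, including a shallower one, can then be trained on the pseudo-labels. Everything below (the random-projection "pretext model", the feature dimensions, the tiny k-means) is a hypothetical stand-in, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretext" model: a frozen random projection with ReLU,
# acting as a learned feature extractor (hypothetical; the paper's
# actual pretext task and architecture are not reproduced here).
W_pretext = rng.normal(size=(32, 16))

def pretext_features(x):
    return np.maximum(x @ W_pretext, 0.0)

def kmeans(feats, k=4, iters=20):
    # Minimal k-means: random init, then alternate assign/update.
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            pts = feats[assign == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return assign

# 1) Run the frozen pretext model on unlabeled data and cluster
#    its features into pseudo-labels.
X = rng.normal(size=(200, 32))
pseudo_labels = kmeans(pretext_features(X))

# 2) Any other (possibly shallower) model can now be trained on
#    pseudo_labels, decoupling the pretext architecture from the
#    target architecture.
```

The point of the sketch is only the interface: once knowledge is distilled into cluster assignments, the pretext and target models no longer need to share weights or structure.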
This allows us to: 1) quantitatively assess previously incompatible models, including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12, and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9% to 2.6% in object detection on PASCAL VOC 2007.

Item
DeepCAMP: Deep Convolutional Action & Attribute Mid-Level Patterns
(IEEE, 2016-12-12) Diba, Ali; Pazandeh, Ali Mohammad; Pirsiavash, Hamed; Gool, Luc Van
The recognition of human actions and the determination of human attributes are two tasks that call for fine-grained classification. Indeed, often rather small and inconspicuous objects and features have to be detected to tell the classes apart. To deal with this challenge, we propose a novel convolutional neural network that mines mid-level image patches dedicated to resolving the corresponding subtleties. In particular, we train a newly designed CNN (DeepPattern) that learns discriminative patch groups. There are two innovative aspects to this: on the one hand, we exploit contextual information in an original fashion; on the other hand, we let an iteration of feature learning and patch clustering purify the set of dedicated patches that we use. We validate our method for action classification on two challenging datasets, PASCAL VOC 2012 Action and Stanford 40 Actions, and for attribute recognition on the Berkeley Attributes of People dataset.
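At a very coarse level, the iterative feature-learning and patch-clustering purification just described can be sketched as alternating a clustering step with a step that discards patches far from their cluster centre. The random "patch descriptors", the cluster count, and the median-distance cutoff below are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy patch descriptors (hypothetical stand-in for CNN patch features).
patches = rng.normal(size=(300, 8))

def cluster(feats, k, iters=15):
    # Minimal k-means returning assignments and centres.
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        a = d.argmin(1)
        for j in range(k):
            if (a == j).any():
                centers[j] = feats[a == j].mean(0)
    return a, centers

# Alternate clustering with a "purification" step that keeps only
# patches close to their cluster centre, mimicking the iterative
# feature-learning / patch-clustering loop at a very coarse level.
kept = np.arange(len(patches))
for _ in range(3):
    a, c = cluster(patches[kept], k=5)
    dist = np.linalg.norm(patches[kept] - c[a], axis=1)
    kept = kept[dist <= np.median(dist)]  # drop the farthest half
```

In the paper the feature extractor itself is retrained between rounds; here the features stay fixed, so only the purification side of the loop is shown.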
Our discriminative mid-level mining CNN obtains state-of-the-art results on these datasets, without needing annotations of parts and poses.

Item
Joint Semantic Segmentation and Depth Estimation with Deep Convolutional Networks
(IEEE, 2016-12-19) Mousavian, Arsalan; Pirsiavash, Hamed; Košecká, Jana
Multi-scale deep CNNs have been used successfully for problems that map each pixel to a label, such as depth estimation and semantic segmentation. It has also been shown that such architectures are reusable and can serve multiple tasks. These networks are typically trained independently for each task by varying the output layer(s) and the training objective. In this work we present a new model for simultaneous depth estimation and semantic segmentation from a single RGB image. Our approach demonstrates the feasibility of training parts of the model for each task and then fine-tuning the full, combined model on both tasks simultaneously using a single loss function. Furthermore, we couple the deep CNN with a fully connected CRF, which captures the contextual relationships and interactions between the semantic and depth cues, improving the accuracy of the final results. The proposed model is trained and evaluated on the NYU Depth V2 dataset [23], outperforming state-of-the-art methods on semantic segmentation and achieving comparable results on the task of depth estimation.
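The single-loss-function idea in this last entry can be sketched as a weighted sum of a per-pixel segmentation cross-entropy and a depth regression error. The weighting `lam` and the plain squared-error depth term below are assumptions for illustration, not the paper's exact objective (which also involves a fully connected CRF).

```python
import numpy as np

def joint_loss(seg_logits, seg_labels, depth_pred, depth_gt, lam=0.5):
    """Toy combined loss: per-pixel cross-entropy for segmentation
    plus lam-weighted mean squared error for depth. lam is a
    hypothetical trade-off weight, not taken from the paper."""
    # Numerically stable log-softmax over the class axis.
    z = seg_logits - seg_logits.max(-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(-1, keepdims=True))
    n = seg_labels.size
    ce = -logp.reshape(n, -1)[np.arange(n), seg_labels.ravel()].mean()
    mse = ((depth_pred - depth_gt) ** 2).mean()
    return ce + lam * mse

# Usage on toy per-pixel predictions for a 4x4 image, 3 classes.
rng = np.random.default_rng(2)
logits = rng.normal(size=(4, 4, 3))
labels = rng.integers(0, 3, size=(4, 4))
depth_gt = rng.normal(size=(4, 4))
loss = joint_loss(logits, labels, depth_gt + 0.1, depth_gt)
```

Training both heads against one scalar like this is what lets the combined model be fine-tuned on both tasks simultaneously.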