VISUAL COMPUTATIONAL CONTEXT: USING COMPOSITIONS AND NON-TARGET PIXELS FOR NOVEL CLASS DISCOVERY
Links to Files
Permanent Link
Author/Creator
Author/Creator ORCID
Date
2019-01-01
Type of Work
Department
Computer Science and Electrical Engineering
Program
Computer Science
Citation of Original Publication
Rights
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Abstract
During the deep learning revolution in computer science that has occurred since 2006, two factors have pushed our ability to successfully learn from large-scale data sources: exponential growth in computational power and growth in the size and degree of annotation of our datasets. Modern models loaded on a Graphics Processing Unit (GPU) can fill an entire 12 GB of Video Random Access Memory (VRAM); a training task achievable in weeks on a GPU would have taken centuries on CPUs from 10 years ago [1]. The standard computer vision dataset at the time, the Modified National Institute of Standards and Technology (MNIST) dataset, consisted of 70,000 28 × 28 pixel grayscale images spanning 10 class labels. The more recent ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset contains over 15 million full-color images with 1,000 different class labels. During this time, however, there has been little growth in the use of context in images. Context can be used to identify target objects that may be obfuscated in the input space, as well as to confirm or deny the existence of objects based on their underlying parts. I use context in two main ways to improve object detection and scene understanding. First, I use the locations of and correlations between objects to infer difficult-to-see and obfuscated objects [2]. In my second study, I further support the necessity of non-target pixels by using background pixels of the image to aid in classification, instead of relying only on other objects in the scene. In addition, I use case-based reasoning to detect novel objects that were not seen during training and to classify them with other visually similar objects based on their observable parts. I use this case-based reasoning model in conjunction with a CNN to demonstrate that case-based reasoning can overcome shortcomings of a traditional deep-learned network.
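
To make the part-based, case-based reasoning idea concrete, the following is a minimal illustrative sketch, not the thesis's actual model. It assumes a CNN has already produced a part-presence vector for an image, stores one prototype vector per known class as a case base, matches a query by cosine similarity, and flags the query as a candidate novel class when no stored case is similar enough. All class names, vectors, and the threshold below are hypothetical.

    import numpy as np

    # Hypothetical case base: one part-presence vector per known class.
    # Dimensions here stand for (wheels, frame, wings, handlebars).
    case_base = {
        "bicycle": np.array([1.0, 1.0, 0.0, 1.0]),
        "airplane": np.array([1.0, 0.0, 1.0, 0.0]),
    }

    def classify_by_parts(part_vector, cases, novelty_threshold=0.9):
        """Return the best-matching known class, or 'novel' if no
        stored case is similar enough to the observed parts."""
        best_label, best_sim = None, -1.0
        for label, case in cases.items():
            # Cosine similarity between observed parts and the stored case.
            sim = float(np.dot(part_vector, case) /
                        (np.linalg.norm(part_vector) * np.linalg.norm(case) + 1e-9))
            if sim > best_sim:
                best_label, best_sim = label, sim
        if best_sim < novelty_threshold:
            # Unseen part composition: candidate new class.
            return "novel", best_sim
        return best_label, best_sim

    # A query whose parts (wheels, frame, wings, handlebars all present)
    # match no stored case closely enough.
    query = np.array([1.0, 1.0, 1.0, 1.0])
    print(classify_by_parts(query, case_base))  # -> ('novel', 0.866...)

Running the sketch labels the query "novel" because its part composition matches no stored case above the threshold, mirroring how an unseen combination of observable parts can signal a class that was absent from training.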