VISUAL COMPUTATIONAL CONTEXT: USING COMPOSITIONS AND NON-TARGET PIXELS FOR NOVEL CLASS DISCOVERY

dc.contributor.advisor: Oates, Tim
dc.contributor.author: Turner, JT
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Computer Science
dc.date.accessioned: 2021-09-01T13:55:57Z
dc.date.available: 2021-09-01T13:55:57Z
dc.date.issued: 2019-01-01
dc.description.abstract: During the deep learning revolution in computer science that has occurred since 2006, two factors have pushed our ability to successfully learn from large-scale data sources: exponential growth in computational power, and the size and degree of annotation of our datasets. Modern models loaded on a Graphics Processing Unit (GPU) can fill an entire 12 GB Video Random Access Memory (VRAM) graphics card cache, making training tasks achievable in weeks that would have taken centuries on CPUs from 10 years ago [1]. The standard computer vision dataset at the time, the Modified National Institute of Standards and Technology (MNIST) dataset, consisted of 70,000 28 × 28 pixel grayscale images with 10 class labels. The more recent ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset contains over 15 million full-color images with 1,000 different class labels. During this time, however, there has been little growth in the use of context in images. Context can be used to identify target objects that may be obfuscated in the input space, as well as to confirm or deny the existence of objects based on underlying parts. I use context in two main ways to improve object detection and scene understanding. First, I use the locations of and correlations between objects to infer difficult-to-see and obfuscated objects [2]. In my second study, I further support the necessity of non-target pixels by using the background pixels of the image to aid in classification, instead of only other objects in the scene. In addition, I use case-based reasoning to detect novel objects that were not seen during training and to classify them with other visually similar objects based on their observable parts. I use this case-based reasoning model in conjunction with a CNN to demonstrate that case-based reasoning can overcome shortcomings of a traditional deep-learned network.
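To make the case-based reasoning component of the abstract concrete, the following is a minimal, hypothetical Python sketch (not the dissertation's actual implementation): CNN feature embeddings of known training examples are stored as cases, a query is classified by its nearest case, and it is flagged as a novel class when even the nearest case is too distant. The function name, toy embeddings, and distance threshold below are all illustrative assumptions.

import numpy as np

def classify_or_flag_novel(query_emb, case_embs, case_labels, threshold=1.0):
    # Euclidean distance from the query embedding to every stored case.
    dists = np.linalg.norm(case_embs - query_emb, axis=1)
    nearest = int(np.argmin(dists))
    # If even the closest case is beyond the threshold, treat the input
    # as an object class never seen during training.
    if dists[nearest] > threshold:
        return "novel"
    # Otherwise, reuse the label of the most visually similar case.
    return case_labels[nearest]

# Toy usage: three stored cases from two known classes.
cases = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = ["cat", "cat", "dog"]
print(classify_or_flag_novel(np.array([0.05, 0.02]), cases, labels))  # cat
print(classify_or_flag_novel(np.array([9.0, -9.0]), cases, labels))   # novel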
dc.format: application/pdf
dc.genre: dissertations
dc.identifier: doi:10.13016/m2qzna-rpc8
dc.identifier.other: 12046
dc.identifier.uri: http://hdl.handle.net/11603/22930
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.source: Original File Name: Turner_umbc_0434D_12046.pdf
dc.subject: Case-Based Reasoning
dc.subject: CNN
dc.subject: Computer Vision
dc.subject: Context
dc.subject: Deep Learning
dc.subject: Machine Learning
dc.title: VISUAL COMPUTATIONAL CONTEXT: USING COMPOSITIONS AND NON-TARGET PIXELS FOR NOVEL CLASS DISCOVERY
dc.type: Text
dcterms.accessRights: Access limited to the UMBC community. Item may be obtained via Interlibrary Loan through a local library, pending the author/copyright holder's permission.
dcterms.accessRights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu.

Files

Name: Turner_umbc_0434D_12046.pdf
Size: 2.04 MB
Format: Adobe Portable Document Format