ONE-SHOT LEARNING WITHOUT TRANSFER DATA USING DEFORMABLE MESH AUGMENTATIONS

Author/Creator

Author/Creator ORCID

Date

2022-01-01

Department

Computer Science and Electrical Engineering

Program

Computer Science

Citation of Original Publication

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.

Subjects

Abstract

We present a novel nearest neighbor classifier that approximates evaluation of a near-infinite training set through state-space search of a high-dimensional elastic data augmentation space. The classifier demonstrates a large improvement over a baseline nearest neighbor classifier at one-shot learning on the MNIST dataset, with ensemble classification accuracy comparable to that of modern techniques using neural networks and transfer learning from other character datasets. The proposed technique performs a state-space search during classification, using search pruning and heuristics to determine the nonlinear registration between a pair of images by means of elastic transformation. Further, we discuss the empirical behavior of the classifier and its similarities with the theoretical properties of the infinite-sample 1NN classifier. We address and resolve the following three limitations of CNNs: (a) the requirement of a large dataset to train the network, (b) long training times for networks with many layers, and (c) slow processing due to operations such as max pooling. We evaluate a 10-way one-shot classification task on the 10 digits of the MNIST dataset, carrying out 400 one-shot trials on the MNIST test set without any fine-tuning on the training set. For this task we also evaluate a nearest neighbor baseline, a Convolutional Siamese Network, and neural networks trained with only a single image from each class. To our knowledge, our classifier is among the best compared with baseline classifiers, neural networks, and other transfer learning approaches. This experiment sheds light on the problem of image classification using one-shot learning without transfer data, or with only a very small precomputed training dataset.
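To make the core idea of the abstract concrete, the following is a minimal Python sketch of a one-shot 1NN classifier over an elastic-deformation space. It is an illustration only: where the thesis performs a guided state-space search with pruning and heuristics over deformations, this sketch simply samples random smooth deformations of each class exemplar and picks the class of the closest variant. The function names (`elastic_deform`, `one_shot_1nn`), the box-filter smoothing, and all parameter values are assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def elastic_deform(img, alpha=2.0, smooth_iters=3, rng=None):
    """Warp a 2-D image with a random smooth displacement field.

    A per-pixel Gaussian noise field is smoothed by repeated box
    filtering (a cheap stand-in for Gaussian smoothing), scaled by
    `alpha`, and used to displace sampling coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    dx = rng.standard_normal((h, w))
    dy = rng.standard_normal((h, w))
    for _ in range(smooth_iters):
        # average each displacement field with its 4-neighbour shifts
        dx = (dx + np.roll(dx, 1, 0) + np.roll(dx, -1, 0)
                 + np.roll(dx, 1, 1) + np.roll(dx, -1, 1)) / 5.0
        dy = (dy + np.roll(dy, 1, 0) + np.roll(dy, -1, 0)
                 + np.roll(dy, 1, 1) + np.roll(dy, -1, 1)) / 5.0
    ys, xs = np.mgrid[0:h, 0:w]
    ys = np.clip(np.rint(ys + alpha * dy), 0, h - 1).astype(int)
    xs = np.clip(np.rint(xs + alpha * dx), 0, w - 1).astype(int)
    return img[ys, xs]

def one_shot_1nn(query, exemplars, n_deforms=50, rng=None):
    """Classify `query` given one exemplar image per class.

    Each exemplar spawns `n_deforms` elastic variants; the prediction
    is the class whose (possibly deformed) exemplar lies closest to
    the query in L2 distance. Random sampling here approximates the
    thesis's search of the deformation space.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_cls, best_dist = None, np.inf
    for cls, ex in exemplars.items():
        candidates = [ex] + [elastic_deform(ex, rng=rng)
                             for _ in range(n_deforms)]
        for cand in candidates:
            d = np.linalg.norm(query - cand)
            if d < best_dist:
                best_cls, best_dist = cls, d
    return best_cls
```

A usage example with two synthetic 16x16 "characters" (a horizontal and a vertical bar): `one_shot_1nn(query, {"h": h_bar, "v": v_bar})` returns the label of the exemplar family whose deformed variant best matches the query, which is how the baseline-versus-augmented comparison in the abstract can be reproduced in miniature.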