Iterative Random Training Sampling Convolutional Neural Network for Hyperspectral Image Classification
Author/Creator
C.-I. Chang, C.-C. Liang, and P. Hu
Date
2023-05-26
Type of Work
Journal article
Citation of Original Publication
C.-I. Chang, C.-C. Liang, and P. Hu, "Iterative Random Training Sampling Convolutional Neural Network for Hyperspectral Image Classification," IEEE Transactions on Geoscience and Remote Sensing, 2023, doi: 10.1109/TGRS.2023.3280205.
Rights
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
Convolutional neural networks (CNNs) have recently received considerable interest in hyperspectral image classification (HSIC) due to their excellent spectral–spatial feature extraction capability. To improve CNN, many approaches have been directed at exploring the infrastructure of its network by introducing different paradigms. This article takes a rather different approach by developing an iterative CNN that extends a CNN with a feedback system so that the same CNN is processed repeatedly in an iterative manner. The idea is to take advantage of the recently developed iterative random training sampling spectral–spatial classification (IRTS-SSC), which allows a CNN to update the spatial information of its classification maps through a feedback spatial filtering system via IRTS. The resulting CNN, called iterative random training sampling CNN (IRTS-CNN), has several unique features. First, IRTS-CNN combines CNN and IRTS-SSC into one paradigm, an architecture that has not been investigated before. Second, it implements a series of spatial filters to capture spatial information of classified data samples and feeds this information back through an iterative process to expand the current input data cube for the next iteration. Third, it uses the expanded data cube to randomly reselect training samples and then reimplements the CNN iteratively. Finally, IRTS-CNN provides a general framework in which any arbitrary CNN can serve as the initial classifier and have its performance improved through the iterative process. Extensive experiments demonstrate that IRTS-CNN indeed significantly improves CNN, specifically when only a small number of training samples is available.
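
The loop below is a minimal Python sketch of the iterative procedure outlined in the abstract, assuming a scikit-learn-style classifier (fit/predict_proba) stands in for the CNN and a simple mean filter stands in for the paper's spatial filters; all function and parameter names are illustrative and not taken from the paper.

import numpy as np
from scipy.ndimage import uniform_filter


def irts_cnn_sketch(cube, labels, make_classifier, n_train_per_class=20,
                    n_iters=5, filter_size=5, seed=None):
    """cube: (H, W, B) hyperspectral cube; labels: (H, W) class ids, 0 = unlabeled."""
    rng = np.random.default_rng(seed)
    H, W, _ = cube.shape
    expanded = cube.copy()                      # input data cube for the current iteration
    classes = np.unique(labels[labels > 0])

    for _ in range(n_iters):
        # Randomly (re)select a small training set from the labeled pixels.
        rows_sel, cols_sel = [], []
        for c in classes:
            r, col = np.nonzero(labels == c)
            pick = rng.choice(len(r), size=min(n_train_per_class, len(r)), replace=False)
            rows_sel.append(r[pick])
            cols_sel.append(col[pick])
        tr_r = np.concatenate(rows_sel)
        tr_c = np.concatenate(cols_sel)

        # Train the base classifier (a CNN in the paper) on the expanded cube.
        clf = make_classifier()
        clf.fit(expanded[tr_r, tr_c, :], labels[tr_r, tr_c])

        # Classify every pixel, then smooth each per-class score map spatially.
        proba = clf.predict_proba(expanded.reshape(-1, expanded.shape[-1])).reshape(H, W, -1)
        smoothed = np.stack(
            [uniform_filter(proba[..., k], size=filter_size) for k in range(proba.shape[-1])],
            axis=-1,
        )

        # Feedback step: stack the filtered classification maps onto the original
        # bands to form the expanded input cube for the next iteration.
        expanded = np.concatenate([cube, smoothed], axis=-1)

    return clf, expanded

With a conventional classifier standing in for the CNN, the loop can be exercised as, for example, clf, _ = irts_cnn_sketch(cube, labels, lambda: RandomForestClassifier(n_estimators=200)); in the paper the base classifier is a CNN and the spatial feedback filtering is more elaborate than a single mean filter.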