Iterative Random Training Sampling Approaches to Hyperspectral Image Classification with and without Background

dc.contributor.advisor: Chang, Chein-I; Hu, Peter
dc.contributor.author: Liang, Chia-Chen
dc.contributor.department: Computer Science and Electrical Engineering
dc.contributor.program: Engineering, Electrical
dc.date.accessioned: 2025-07-18T17:08:30Z
dc.date.issued: 2025-01-01
dc.description.abstract: As one of the fundamental tasks in remote sensing, hyperspectral image classification (HSIC) has attracted considerable interest. Among existing techniques, 3D Convolutional Neural Networks (3D-CNNs) are considered powerful spectral-spatial classification (SSC) approaches due to their ability to automatically learn hierarchical spectral-spatial features. However, their performance depends heavily on large labeled datasets and requires significant computational resources. In addition, the background (BKG) issue in HSIC has been largely overlooked, despite being a key challenge in real-world applications. The first contribution of this dissertation is the development of a novel deep learning framework, named the Iterative Random Training Sampling Convolutional Neural Network (IRTS-CNN). This framework integrates iterative random training sampling spectral-spatial classification (IRTS-SSC) with CNNs, allowing CNN models to iteratively update spatial information through a feedback-based spatial filtering mechanism. IRTS-CNN serves as a generalizable structure that can enhance any baseline CNN classifier through an iterative refinement process. Experimental results show that IRTS-CNN significantly improves CNN classification performance, especially when the training sample size is small. To overcome the computational inefficiencies of deep learning methods, the second contribution of this dissertation introduces the concept of the Iterative Gaussian-Laplacian Pyramid Network (IGLPN). A traditional CNN consists of a series of feedforward layers, each composed of convolutional (CL) and pooling (PL) sublayers. This architecture can be interpreted through a Gaussian Pyramid (GP), where each layer applies low-pass filtering and downsampling. Additionally, a Laplacian Pyramid (LP) can be constructed to capture differential information between consecutive layers. IGLPN leverages these pyramid structures to realize CNN functionality more efficiently, reducing computational demands while improving classification accuracy. In the past, most HSIC methods have assumed that BKG is removed using ground truth (GT). Unfortunately, this is impractical in real-world scenarios. HSIC methods that perform well without background (HSIC-NB) typically do not work well when BKG is included. To address this, the third contribution of this dissertation develops a new approach to HSIC with background included (HSIC-B), called Hierarchical One-Class Detection (HOCD), which extends One-Class Detection (OCD) to a hierarchical framework guided by Class Classification Priority (CCP). Experiments demonstrate that HOCD achieves robust accuracy with minimal degradation in the presence of background. These three contributions advance HSIC by enhancing adaptability to limited labeled data, reducing resource demands, and addressing background challenges, as validated through extensive experiments.
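The abstract's second contribution rests on the classical Gaussian/Laplacian pyramid construction: each Gaussian Pyramid level applies spatial low-pass filtering followed by downsampling (mirroring a convolution-plus-pooling sublayer pair), and each Laplacian Pyramid level records the detail lost between consecutive levels. The sketch below only illustrates that standard construction on a hyperspectral cube and is not the dissertation's IGLPN implementation; the function name, the Gaussian sigma, the number of levels, and the synthetic data are assumptions.

# Minimal, illustrative sketch (not the dissertation's code) of spatial
# Gaussian and Laplacian pyramids over a hyperspectral cube of shape (H, W, B).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom


def gaussian_laplacian_pyramids(cube, levels=3, sigma=1.0):
    """Build spatial Gaussian (GP) and Laplacian (LP) pyramids of an (H, W, B) cube."""
    gp = [cube.astype(np.float64)]
    for _ in range(levels):
        # Low-pass filter the two spatial axes only (no smoothing across bands) ...
        smoothed = gaussian_filter(gp[-1], sigma=(sigma, sigma, 0.0))
        # ... then downsample spatially by 2, keeping all spectral bands.
        gp.append(smoothed[::2, ::2, :])

    lp = []
    for fine, coarse in zip(gp[:-1], gp[1:]):
        # Upsample the coarser level back to the finer spatial grid and take the difference.
        factors = (fine.shape[0] / coarse.shape[0], fine.shape[1] / coarse.shape[1], 1.0)
        lp.append(fine - zoom(coarse, factors, order=1))
    return gp, lp


if __name__ == "__main__":
    toy_cube = np.random.rand(64, 64, 10)   # synthetic 64x64 scene with 10 bands (assumption)
    gp, lp = gaussian_laplacian_pyramids(toy_cube)
    print([g.shape for g in gp])            # (64, 64, 10), (32, 32, 10), (16, 16, 10), (8, 8, 10)
    print([d.shape for d in lp])            # each LP level matches its finer GP level

Setting the per-axis sigma to zero along the band axis keeps the smoothing strictly spatial, so each spectral band is pyramided independently, which is consistent with the spectral-spatial framing in the abstract.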
dc.format: application/pdf
dc.genre: dissertation
dc.identifier: doi:10.13016/m2dxuh-yoi5
dc.identifier.other: 13067
dc.identifier.uri: http://hdl.handle.net/11603/39405
dc.language: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Theses and Dissertations Collection
dc.relation.ispartof: UMBC Graduate School Collection
dc.relation.ispartof: UMBC Student Collection
dc.source: Original File Name: Liang_umbc_0434D_13067.pdf
dc.title: Iterative Random Training Sampling Approaches to Hyperspectral Image Classification with and without Background
dc.type: Text
dcterms.accessRights: Distribution Rights granted to UMBC by the author.
dcterms.accessRights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu

Files

Original bundle

Name: Liang_umbc_0434D_13067.pdf
Size: 7.55 MB
Format: Adobe Portable Document Format

License bundle

Name: Liang-Chia-Chen_Open.pdf
Size: 247.93 KB
Format: Adobe Portable Document Format