A Simplified 2D-3D CNN Architecture for Hyperspectral Image Classification Based on Spatial–Spectral Fusion

dc.contributor.author: Yu, Chunyan
dc.contributor.author: Han, Rui
dc.contributor.author: Song, Meiping
dc.contributor.author: Liu, Caiyu
dc.contributor.author: Chang, Chein-I
dc.date.accessioned: 2022-11-09T18:03:15Z
dc.date.available: 2022-11-09T18:03:15Z
dc.date.issued: 2020-04-27
dc.description.abstract: Convolutional neural networks (CNNs) have led to a breakthrough in hyperspectral image classification (HSIC). Given the intrinsic spatial-spectral structure of a hyperspectral cube, feature extraction with 3-D convolution operations is a straightforward approach to HSIC. However, the overwhelming number of features produced by a full 3-D CNN leads to overfitting and high training cost. To address this issue, this article implements a novel HSIC framework based on a simplified 2D-3D CNN, built from the cooperation of a 2-D CNN and a 3-D convolution layer. First, the 2-D convolution block extracts spatial features while treating the abundant spectral information as training channels. Then, the 3-D convolution layer concentrates on exploiting band correlation by using a reduced kernel. The proposed architecture obtains spatial and spectral features simultaneously in a joint 2D-3D pattern, yielding superior fused features for the subsequent classification. Furthermore, a deconvolution layer intended to enhance the robustness of the deep features is incorporated into the proposed network. Results and analysis of extensive experiments on real hyperspectral images demonstrate that the proposed lightweight 2D-3D CNN can effectively extract refined features and improve classification accuracy.
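The sketch below illustrates one plausible reading of the 2D-3D design described in the abstract: a 2-D block with spectral bands as input channels, followed by a single 3-D convolution with a reduced kernel over the stacked feature maps. PyTorch is an assumed framework; the layer counts, channel widths, kernel sizes, and the omission of the paper's deconvolution layer are illustrative choices, not the authors' exact configuration.

import torch
import torch.nn as nn

class Simplified2D3DCNN(nn.Module):
    """Minimal sketch of a joint 2D-3D CNN for hyperspectral patches.

    Input: (batch, bands, height, width). The 2-D block treats the
    spectral bands as input channels, so spatial features are learned
    over spectrally rich inputs; a single 3-D convolution with a small
    (reduced) kernel then models correlation along the feature depth.
    """

    def __init__(self, n_bands: int, n_classes: int):
        super().__init__()
        # 2-D block: extracts spatial features with bands as channels.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Single 3-D convolution with a reduced (7x3x3) kernel to
        # exploit inter-band correlation in the stacked feature maps.
        self.spectral = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(8, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f2d = self.spatial(x)      # (B, 32, H, W)
        f3d = f2d.unsqueeze(1)     # (B, 1, 32, H, W): depth axis for Conv3d
        return self.classifier(self.spectral(f3d))

# Example: a batch of four 9x9 patches from a 200-band cube, 16 classes.
model = Simplified2D3DCNN(n_bands=200, n_classes=16)
logits = model(torch.randn(4, 200, 9, 9))
print(logits.shape)  # torch.Size([4, 16])

Reusing the 2-D feature maps as the depth axis of a single lightweight Conv3d is what keeps the parameter count small relative to a full 3-D CNN, which is the trade-off the abstract motivates.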
dc.description.sponsorship: The work was supported in part by the Natural Science Foundation of Liaoning Province under Grant 20170540095, in part by the Fundamental Research Funds for the Central Universities under Grant 3132019341, in part by the Recruitment Program of Global Experts for the National Science and Technology Major Project, State Administration of Foreign Experts Affairs, under Grant ZD20180073, and in part by the National Natural Science Foundation of China under Grant 61601077, Grant 61801075, Grant 61971082, and Grant 41801231.
dc.description.uri: https://ieeexplore.ieee.org/document/9078778
dc.format.extent: 17 pages
dc.genre: journal articles
dc.identifier: doi:10.13016/m2vpaq-btu3
dc.identifier.citation: C. Yu, R. Han, M. Song, C. Liu and C.-I. Chang, "A Simplified 2D-3D CNN Architecture for Hyperspectral Image Classification Based on Spatial–Spectral Fusion," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 2485-2501, 2020, doi: 10.1109/JSTARS.2020.2983224.
dc.identifier.uri: https://doi.org/10.1109/JSTARS.2020.2983224
dc.identifier.uri: http://hdl.handle.net/11603/26286
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: A Simplified 2D-3D CNN Architecture for Hyperspectral Image Classification Based on Spatial–Spectral Fusion
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-5450-4891

Files

Original bundle

Name: A_Simplified_2D-3D_CNN_Architecture_for_Hyperspectral_Image_Classification_Based_on_SpatialSpectral_Fusion.pdf
Size: 10.05 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed to upon submission