A Generic and Extendable Framework for Benchmarking and Assessing the Change Detection Models

dc.contributor.author: Hassouna, Ahmed Alaa Abdelbaky
dc.contributor.author: Ismail, Mohamed Badr
dc.contributor.author: Alqahtani, Ali
dc.contributor.author: Alqahtani, Nayef
dc.contributor.author: Hassan, Amany Shaban
dc.contributor.author: Ashqar, Huthaifa
dc.contributor.author: AlSobeh, Anas M. R.
dc.contributor.author: Hassan, Abdallah A.
dc.contributor.author: Elhenawy, Mohammed
dc.date.accessioned: 2024-10-28T14:31:14Z
dc.date.available: 2024-10-28T14:31:14Z
dc.date.issued: 2024-03-20
dc.description.abstract: Change Detection (CD) in aerial images refers to identifying and analyzing changes between two or more aerial images of the same location taken at different times. CD is a highly challenging task because relevant changes, such as urban expansion, deforestation, or post-disaster damage, must be distinguished from irrelevant ones, such as lighting conditions, shadows, and seasonal variations. Although many recent studies propose new deep learning (DL) models to improve CD performance and compare them against state-of-the-art (SOTA) baselines, these comparative analyses are often restricted, offering little insight into the proposed models' real-world generalizability, robustness, and performance trade-offs across diverse change characteristics. This paper presents a novel generic framework to systematically benchmark and assess DL-based CD models through three parallel pipelines: 1) cross-testing models on diverse benchmark datasets to evaluate generalization, 2) robustness analysis against different image corruptions, and 3) multi-faceted contour-level analytics evaluating detection sensitivity to change size and complexity. The framework is applied to comparatively evaluate five SOTA DL-based CD models: ChangeFormer, BIT, Tiny, SNUNet, and CSA-CDGAN. Extensive experiments unveil each model's strengths, limitations, and biases, highlighting their relative proficiency in generalizing across data distributions, resilience to noise corruption, and discriminative capability for changes of varying characteristics. By comprehensively evaluating generalizability, robustness, and detection capability across diverse real-world scenarios, the proposed benchmarking framework can guide the selection of CD models suited to specific application requirements. This systematic evaluation approach can also drive future research toward more robust and versatile CD solutions aligned with practical needs.
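The three pipelines the abstract describes lend themselves to a common evaluation loop. Below is a minimal Python sketch (using NumPy and OpenCV) of how such a framework could be wired together; the function names (change_metrics, gaussian_corruption, contour_areas, cross_test), the choice of F1/IoU as metrics, and additive Gaussian noise as the corruption type are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
import cv2  # only needed for the contour-level analytics sketch

def change_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-level precision/recall/F1/IoU for binary change masks (1 = change)."""
    tp = int(np.logical_and(pred == 1, gt == 1).sum())
    fp = int(np.logical_and(pred == 1, gt == 0).sum())
    fn = int(np.logical_and(pred == 0, gt == 1).sum())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}

def gaussian_corruption(image: np.ndarray, severity: float = 0.1) -> np.ndarray:
    """Robustness pipeline, one corruption type: additive Gaussian noise on 8-bit imagery."""
    noisy = image.astype(np.float32) + np.random.normal(0.0, severity * 255.0, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def contour_areas(mask: np.ndarray) -> list:
    """Contour-level analytics: area of each connected change region in a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.contourArea(c) for c in contours]

def cross_test(models: dict, datasets: dict) -> dict:
    """Cross-testing pipeline: evaluate every model on every dataset.

    models:   name -> callable(img_a, img_b) returning a binary change mask
    datasets: name -> list of (img_a, img_b, gt_mask) triples
    Returns mean F1 per (model, dataset) pair.
    """
    results = {}
    for m_name, predict in models.items():
        for d_name, samples in datasets.items():
            f1s = [change_metrics(predict(a, b), gt)["f1"] for a, b, gt in samples]
            results[(m_name, d_name)] = float(np.mean(f1s)) if f1s else float("nan")
    return results
```

A robustness run would wrap each predict call with gaussian_corruption at increasing severities, and the contour areas could be binned to relate detection F1 to change size, mirroring the size/complexity sensitivity analysis the abstract describes.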
dc.description.uri: https://www.preprints.org/manuscript/202403.1106/v1
dc.format.extent: 37 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m24vhv-hswb
dc.identifier.uri: https://doi.org/10.20944/preprints202403.1106.v1
dc.identifier.uri: http://hdl.handle.net/11603/36811
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Data Science
dc.rights: Attribution 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Deep Learning
dc.subject: Aerial Images
dc.subject: Benchmarking
dc.subject: Change Detection
dc.subject: Contour Analytics
dc.subject: Convolutional Neural Network (CNN)
dc.subject: Generalization
dc.subject: Model Evaluation
dc.subject: Recurrent Neural Network (RNN)
dc.subject: Remote Sensing
dc.subject: Robustness Analysis
dc.subject: Sustainable Development
dc.title: A Generic and Extendable Framework for Benchmarking and Assessing the Change Detection Models
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-6835-8338

Files

Original bundle

Name: preprints202403.1106.v1.pdf
Size: 9.65 MB
Format: Adobe Portable Document Format