3D cloud masking across a broad swath using multi-angle polarimetry and deep learning

dc.contributor.author: Foley, Sean R.
dc.contributor.author: Knobelspiesse, Kirk D.
dc.contributor.author: Sayer, Andrew M.
dc.contributor.author: Gao, Meng
dc.contributor.author: Hays, James
dc.contributor.author: Hoffman, Judy
dc.date.accessioned: 2024-02-06T20:21:52Z
dc.date.available: 2024-02-06T20:21:52Z
dc.date.issued: 2024-12-16
dc.description.abstract: Understanding the 3D structure of clouds is of crucial importance to modeling our changing climate. Both active and passive sensors are restricted to two dimensions: a cross-section in the active case and an image in the passive case. However, multi-angle sensor configurations contain implicit information about 3D structure, due to parallax and atmospheric path differences. Extracting that implicit information requires computationally expensive radiative transfer techniques. Machine learning, as an alternative, may be able to capture some of the complexity of a full 3D radiative transfer solution at significantly less computational expense. In this work, we develop a machine-learning model that predicts radar-based vertical cloud profiles from multi-angle polarimetric imagery. Notably, these models are trained only on center-swath labels but can predict cloud profiles over the entire passive imagery swath. We compare against strong baselines and leverage the information-theoretic nature of machine learning to draw conclusions about the relative utility of various sensor configurations, including spectral channels, viewing angles, and polarimetry. Our experiments show that multi-angle sensors can recover surprisingly accurate vertical cloud profiles; skill depends strongly on the number of viewing angles and spectral channels, with more angles yielding higher performance and the oxygen A band exerting a strong influence. A relatively simple convolutional neural network performs nearly identically to the much more complicated U-Net architecture. The model also shows relatively lower skill for multilayer clouds, horizontally small clouds, and low-altitude clouds over land, while being surprisingly accurate for tall cloud systems. These findings have promising implications for the utility of multi-angle sensors on Earth-observing systems, such as NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) and Atmosphere Observing System (AOS), and encourage future applications of computer vision to atmospheric remote sensing.
dc.description.uri: https://amt.copernicus.org/articles/17/7027/2024/
dc.format.extent: 21 pages
dc.genre: journal articles
dc.identifier.citation: Foley, Sean R., Kirk D. Knobelspiesse, Andrew M. Sayer, Meng Gao, James Hays, and Judy Hoffman. “3D Cloud Masking across a Broad Swath Using Multi-Angle Polarimetry and Deep Learning.” Atmospheric Measurement Techniques 17, no. 24 (2024): 7027–47. https://doi.org/10.5194/amt-17-7027-2024.
dc.identifier.uri: https://doi.org/10.5194/amt-17-7027-2024
dc.identifier.uri: http://hdl.handle.net/11603/31569
dc.language.iso: en_US
dc.publisher: EGU
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC GESTAR II Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain
dc.rights.uri: https://creativecommons.org/publicdomain/mark/1.0/
dc.title: 3D cloud masking across a broad swath using multi-angle polarimetry and deep learning
dc.type: Text
dcterms.creator: https://orcid.org/0000-0001-9149-1789
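
The abstract above describes a convolutional model that maps stacked multi-angle polarimetric imagery to a per-pixel vertical cloud profile. As a rough illustration of that idea only (this is not the authors' code; the layer sizes, the number of viewing angles, spectral bands, and altitude bins are all assumptions chosen for demonstration), a minimal PyTorch sketch might look like:

import torch
import torch.nn as nn

class MultiAngleProfileCNN(nn.Module):
    """Illustrative sketch: viewing angles and spectral bands are stacked
    along the channel axis; the output carries one logit per altitude bin
    at every pixel of the input swath."""
    def __init__(self, n_angles=9, n_bands=8, n_altitude_bins=32):
        super().__init__()
        in_ch = n_angles * n_bands  # stacked views x spectral bands
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_altitude_bins, kernel_size=1),  # profile logits
        )
    def forward(self, x):
        return self.net(x)  # (batch, n_altitude_bins, H, W)

model = MultiAngleProfileCNN()
x = torch.randn(1, 9 * 8, 64, 64)   # one multi-angle radiance stack
profile = torch.sigmoid(model(x))   # per-pixel cloud-occupancy profile
print(profile.shape)                # torch.Size([1, 32, 64, 64])

One plausible way to realize the center-swath training the abstract describes is to compute the loss only on the narrow strip of pixels where radar labels exist; because the network is fully convolutional, it can still produce profiles across the entire passive swath at inference time.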

Files

Original bundle

Name: amt-17-7027-2024.pdf
Size: 4.64 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed to upon submission