3-D Cloud Masking Across a Broad Swath Using Multi-Angle Polarimetry and Deep Learning

Date

2024-01-19

Citation of Original Publication

Foley, Sean R., Kirk D. Knobelspiesse, Andrew M. Sayer, Meng Gao, James Hays, and Judy Hoffman. “3-D Cloud Masking Across a Broad Swath Using Multi-Angle Polarimetry and Deep Learning.” EGUsphere, January 19, 2024, 1–24. https://doi.org/10.5194/egusphere-2023-2392.

Rights

This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain Mark 1.0

Abstract

Understanding the three-dimensional (3D) structure of clouds is of crucial importance to modeling our changing climate. Active sensors, such as radar and lidar, provide accurate vertical cloud profiles but are mostly restricted to along-track sampling. Passive sensors can capture a wide swath but struggle to see beneath cloud tops. In essence, both types of products are restricted to two dimensions: a cross-section in the active case and an image in the passive case. However, multi-angle sensor configurations contain implicit information about 3D structure, due to parallax and atmospheric path differences. Extracting that implicit information can be challenging, requiring computationally expensive radiative transfer techniques that must make limiting assumptions. Machine learning, as an alternative, may be able to capture some of the complexity of a full 3D radiative transfer solution at significantly lower computational cost. In this work, we make three contributions toward understanding 3D cloud structure from multi-angle polarimetry. First, we introduce a large-scale, open-source dataset that fuses existing cloud products into a format more amenable to machine learning. This dataset treats multi-angle polarimetry as input and radar-based vertical cloud profiles as output. Second, we describe and evaluate strong baseline machine learning models that predict these profiles from the passive imagery. Notably, these models are trained only on center-swath labels, yet can predict cloud profiles over the entire passive imagery swath. Third, we leverage the information-theoretic nature of machine learning to draw conclusions about the relative utility of various sensor configurations, including spectral channels, viewing angles, and polarimetry. These findings have implications for Earth-observing missions such as NASA's Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) and Atmosphere Observing System (AOS) missions, and they inform future applications of computer vision to atmospheric remote sensing.
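
To make the dataset pairing and training scheme concrete, the following is a minimal PyTorch-style sketch of the setup the abstract describes: multi-angle polarimetric imagery as input, per-pixel vertical cloud profiles as output, with supervision restricted to the center-swath columns where radar labels exist. The model architecture, tensor shapes, and mask width here are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the authors' code): multi-angle
# polarimetric imagery in, per-pixel vertical cloud-occupancy logits out,
# with the loss masked to the center-swath columns that have radar labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed shapes: V viewing angles x C channels stacked as input planes,
# an H x W passive swath, and Z vertical bins in the predicted profile.
V, C, H, W, Z = 9, 4, 64, 128, 32

class SwathProfileNet(nn.Module):
    """Toy fully convolutional baseline: Z occupancy logits per pixel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(V * C, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, Z, kernel_size=3, padding=1),
        )

    def forward(self, x):   # x: (B, V*C, H, W)
        return self.net(x)  # -> (B, Z, H, W)

model = SwathProfileNet()
imagery = torch.randn(2, V * C, H, W)                  # stand-in radiances
profiles = torch.randint(0, 2, (2, Z, H, W)).float()   # stand-in radar labels

# Radar labels exist only along a narrow center track, so the mask keeps a
# few columns around W // 2; the network still predicts the full swath.
mask = torch.zeros(1, 1, H, W)
mask[..., W // 2 - 2 : W // 2 + 2] = 1.0

logits = model(imagery)
per_element = F.binary_cross_entropy_with_logits(logits, profiles,
                                                 reduction="none")
loss = (per_element * mask).sum() / (mask.sum() * Z * logits.shape[0])
loss.backward()
```

At inference time the mask is simply dropped, so the model emits a profile for every pixel in the passive swath. The same setup also accommodates the sensor-configuration comparisons the abstract mentions: retraining with subsets of the V * C input planes (fewer viewing angles, fewer spectral channels, or intensity-only versus polarized channels) and comparing held-out skill indicates how much each configuration contributes.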