FLOODBOT: VISION AND AI ENABLED FLOOD DETECTION SYSTEMS IN URBAN ENVIRONMENT

Date

2022-01-01

Department

Information Systems

Program

Information Systems

Rights

This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. The item may be obtained via Interlibrary Loan through a local library, pending the author/copyright holder's permission.

Abstract

Flash floods are among the most commonly occurring natural disasters, yet communities are often ill-prepared, both in pre-disaster precautions and in the post-disaster aftermath. We argue that technical and economic resources are significant constraints in identifying, assessing, and reducing disaster risks. While mature flood protection mechanisms exist, they are often expensive and site-specific; costly flood detection and control mechanisms are typically limited to affluent communities, increasing the risk of flood damage in less affluent areas. Our research develops an economically viable, scalable, and mobile flash flood detection system that reduces disaster risk. We combine state-of-the-art machine learning models, the Internet of Things (IoT), crowd-sourcing, participatory sensing, and cloud infrastructure to deliver a social media-based flash flood detection system called FloodBot. The FloodBot is a scalable, mobile, end-to-end, mass-deployable alternative flash flood detection system based on vision, sound, and social media content. Its vision is enabled by state-of-the-art computer vision (CV) techniques, its auditory capabilities by acoustic scene classification (ASC) techniques, and its speech processing by conversational AI.

In this thesis, we propose novel multimodal deep learning frameworks and cross-domain transfer learning techniques to classify flood severity and perform object recognition and segmentation. We employ pre-trained transfer learning techniques to enhance the accuracy of traditional/hand-crafted models and attain 97% accuracy under an ideal scenario. Specifically, we apply deep learning models such as convolutional neural networks (CNNs), single-shot multi-box object detectors (SSDs), and segmentation models to vision-based tasks. We augment vision with sound-based (Mel-spectrogram) deep learning models to classify flood-related environmental sounds under adverse conditions (low light, bad weather). Our experiments with sound signals and deep learning models show that we can classify flood-related sound events with 78% accuracy, even in adverse weather (heavy rain, strong wind). Finally, we demonstrate how memory-based, end-to-end pre-trained language models such as Bidirectional Encoder Representations from Transformers (BERT) enable conversational capabilities and seamlessly integrate the FloodBot into social media.

In summary, this thesis assesses the relevance of multimodal information (image, sound, and social media) and integrates it to deliver proactive notifications, such as tweets, that reduce damage and prepare communities during natural disasters. We have deployed the FloodBot in Ellicott City, a severely flash-flood-prone area in Maryland, with a live Twitter handle, umbc_floodbot, in collaboration with the Howard County Storm Water Management Division, and have released more than 24 hours of annotated, multimodal, AI-ready data (video recordings) into the public domain to foster further research on natural disaster monitoring in the community.
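To illustrate the transfer learning approach described above, the following is a minimal Python/PyTorch sketch of fine-tuning an ImageNet-pre-trained CNN for flood-severity classification. The ResNet-50 backbone, the three severity classes, and the training hyperparameters are illustrative assumptions, not the thesis's exact configuration.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed severity levels: low / moderate / severe

# Load an ImageNet-pre-trained backbone and freeze the feature extractor,
# so only the new classification head is trained on flood imagery.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of labeled camera frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()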
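The vision pipeline also performs object detection. Below is a hedged sketch of running a pre-trained single-shot multi-box detector (SSD) from torchvision on a single camera frame; in the FloodBot's setting the detector would presumably be fine-tuned on flood imagery, and the frame path and confidence threshold here are hypothetical.

import torch
from torchvision.io import read_image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.COCO_V1
model = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()  # resizing/normalization matching the weights

frame = read_image("camera_frame.jpg")  # hypothetical FloodBot camera frame
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep only confident detections (the 0.5 threshold is an arbitrary choice).
keep = detections["scores"] > 0.5
print(detections["boxes"][keep], detections["labels"][keep])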
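For the acoustic branch, flood-related sounds are classified from Mel-spectrogram representations. The sketch below shows one common way to compute a log-Mel spectrogram from an audio clip with torchaudio; the file name and the spectrogram parameters (FFT size, hop length, number of Mel bins) are assumptions for illustration.

import torchaudio

# Load a short clip from the FloodBot microphone (file name is hypothetical).
waveform, sample_rate = torchaudio.load("flood_clip.wav")

# Compute a Mel spectrogram; parameter values are illustrative.
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,
    hop_length=512,
    n_mels=64,
)(waveform)

# Log-scale the magnitudes; a CNN classifier then treats the result
# as a one-channel image (channels x Mel bins x time frames).
log_mel = torchaudio.transforms.AmplitudeToDB()(mel)
print(log_mel.shape)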
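Finally, as a rough illustration of how a pre-trained BERT model can process social media text, the following sketch loads a BERT encoder with a classification head from the Hugging Face transformers library and scores an example tweet. This is a generic text classification sketch, not the thesis's conversational pipeline; the checkpoint, label count, and example text are assumptions.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint and label count are assumptions for illustration.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g., flood-related vs. unrelated
)

inputs = tokenizer("Water is rising fast on Main Street!",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# With an untrained head these probabilities are meaningless until the
# model is fine-tuned on labeled flood-related tweets.
print(logits.softmax(dim=-1))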