MUMOSA, Interactive Dashboard for MUlti-MOdal Situation Awareness
Date
2024-11
Citation of Original Publication
Lukin, Stephanie M., Shawn Bowser, Reece Suchocki, Douglas Summers-Stay, Francis Ferraro, Cynthia Matuszek, and Clare Voss. “MUMOSA, Interactive Dashboard for MUlti-MOdal Situation Awareness.” In Proceedings of the Workshop on the Future of Event Detection (FuturED), edited by Joel Tetreault, Thien Huu Nguyen, Hemank Lamba, and Amanda Hughes, 32–47. Miami, Florida, USA: Association for Computational Linguistics, 2024. https://aclanthology.org/2024.futured-1.4.
Rights
This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. law.
Public Domain
Abstract
Information extraction has led the way in event detection from text for many years. Recent advances in neural models, such as Large Language Models (LLMs) and Vision-Language Models (VLMs), have enabled the integration of multiple modalities, providing richer sources of information about events. Concurrently, the development of schema graphs and 3D reconstruction methods has enhanced our ability to visualize and annotate complex events. Building on these innovations, we introduce the MUMOSA (MUlti-MOdal Situation Awareness) interactive dashboard, which brings these diverse resources together. MUMOSA aims to provide a comprehensive platform for event situational awareness, offering users a powerful tool for understanding and analyzing complex scenarios across modalities.