MUMOSA, Interactive Dashboard for MUlti-MOdal Situation Awareness

dc.contributor.author: Lukin, Stephanie M.
dc.contributor.author: Bowser, Shawn
dc.contributor.author: Suchocki, Reece
dc.contributor.author: Summers-Stay, Douglas
dc.contributor.author: Ferraro, Francis
dc.contributor.author: Matuszek, Cynthia
dc.contributor.author: Voss, Clare
dc.date.accessioned: 2024-12-11T17:02:38Z
dc.date.available: 2024-12-11T17:02:38Z
dc.date.issued: 2024-11
dc.description: Proceedings of the Workshop on the Future of Event Detection (FuturED), November 15, 2024, Miami, Florida, USA
dc.description.abstract: Information extraction has led the way for event detection from text for many years. Recent advances in neural models, such as Large Language Models (LLMs) and Vision-Language Models (VLMs), have enabled the integration of multiple modalities, providing richer sources of information about events. Concurrently, the development of schema graphs and 3D reconstruction methods has enhanced our ability to visualize and annotate complex events. Building on these innovations, we introduce the MUMOSA (MUlti-MOdal Situation Awareness) interactive dashboard that brings these diverse resources together. MUMOSA aims to provide a comprehensive platform for event situational awareness, offering users a powerful tool for understanding and analyzing complex scenarios across modalities.
dc.description.uri: https://aclanthology.org/2024.futured-1.4
dc.format.extent: 16 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m20cmm-vwtb
dc.identifier.citation: Lukin, Stephanie M., Shawn Bowser, Reece Suchocki, Douglas Summers-Stay, Francis Ferraro, Cynthia Matuszek, and Clare Voss. “MUMOSA, Interactive Dashboard for MUlti-MOdal Situation Awareness.” In Proceedings of the Workshop on the Future of Event Detection (FuturED), edited by Joel Tetreault, Thien Huu Nguyen, Hemank Lamba, and Amanda Hughes, 32–47. Miami, Florida, USA: Association for Computational Linguistics, 2024. https://aclanthology.org/2024.futured-1.4.
dc.identifier.uri: https://doi.org/10.18653/v1/2024.futured-1.4
dc.identifier.uri: http://hdl.handle.net/11603/37089
dc.language.iso: en_US
dc.publisher: Association for Computational Linguistics
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain
dc.rights.uri: https://creativecommons.org/publicdomain/mark/1.0/
dc.title: MUMOSA, Interactive Dashboard for MUlti-MOdal Situation Awareness
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-1383-8120

Files

Original bundle

Name: 2024.futured1.4.pdf
Size: 3.47 MB
Format: Adobe Portable Document Format