QuasiNav: Asymmetric Cost-Aware Navigation Planning with Constrained Quasimetric Reinforcement Learning

dc.contributor.author: Hossain, Jumman
dc.contributor.author: Faridee, Abu Zaher Md
dc.contributor.author: Asher, Derrik
dc.contributor.author: Freeman, Jade
dc.contributor.author: Trout, Theron
dc.contributor.author: Gregory, Timothy
dc.contributor.author: Roy, Nirmalya
dc.date.accessioned: 2024-12-11T17:02:27Z
dc.date.available: 2024-12-11T17:02:27Z
dc.date.issued: 2024-10-22
dc.description.abstract: Autonomous navigation in unstructured outdoor environments is inherently challenging due to the presence of asymmetric traversal costs, such as varying energy expenditures for uphill versus downhill movement. Traditional reinforcement learning methods often assume symmetric costs, which can lead to suboptimal navigation paths and increased safety risks in real-world scenarios. In this paper, we introduce QuasiNav, a novel reinforcement learning framework that integrates quasimetric embeddings to explicitly model asymmetric costs and guide efficient, safe navigation. QuasiNav formulates the navigation problem as a constrained Markov decision process (CMDP) and employs quasimetric embeddings to capture directionally dependent costs, allowing for a more accurate representation of the terrain. This approach is combined with adaptive constraint tightening within a constrained policy optimization framework to dynamically enforce safety constraints during learning. We validate QuasiNav across three challenging navigation scenarios (undulating terrains, asymmetric hill traversal, and directionally dependent terrain traversal), demonstrating its effectiveness in both simulated and real-world environments. Experimental results show that QuasiNav significantly outperforms conventional methods, achieving higher success rates, improved energy efficiency, and better adherence to safety constraints.
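The abstract's central object is a quasimetric: a distance function that keeps non-negativity, identity, and the triangle inequality but drops symmetry, so the cost from A to B can differ from the cost from B to A. The following toy sketch illustrates that idea on terrain heights; the cost weights and function are hypothetical illustrations, not the paper's learned embedding.

```python
# Illustrative sketch only (not QuasiNav's implementation): a hand-written
# quasimetric over terrain heights, where climbing costs more than descending.
# UP_COST and DOWN_COST are assumed parameters chosen for illustration.

UP_COST = 3.0    # cost per unit of elevation gained (assumed)
DOWN_COST = 1.0  # cost per unit of elevation lost (assumed)

def traversal_cost(h_from: float, h_to: float) -> float:
    """Directionally dependent cost between two terrain heights.

    Satisfies the quasimetric axioms: non-negative, zero exactly when the
    heights are equal, and the triangle inequality. It is deliberately NOT
    symmetric, since elevation gain is weighted more heavily than loss.
    """
    dh = h_to - h_from
    return UP_COST * dh if dh >= 0 else DOWN_COST * (-dh)

if __name__ == "__main__":
    # Asymmetry: going up 10 m costs more than coming back down.
    print(traversal_cost(0.0, 10.0))   # 30.0
    print(traversal_cost(10.0, 0.0))   # 10.0
    # Triangle inequality holds through any intermediate height.
    assert traversal_cost(0.0, 10.0) <= (
        traversal_cost(0.0, 5.0) + traversal_cost(5.0, 10.0)
    )
```

A planner that assumes a symmetric metric would assign the same cost to both directions of the 10 m climb above; modeling the asymmetry is what lets a cost-aware planner prefer, say, a longer but gentler ascent.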
dc.description.sponsorship: This work has been supported by U.S. Army Grant #W911NF2120076.
dc.description.uri: http://arxiv.org/abs/2410.16666
dc.format.extent: 8 pages
dc.genre: journal articles
dc.genre: preprints
dc.identifier: doi:10.13016/m2u64e-x6ma
dc.identifier.uri: https://doi.org/10.48550/arXiv.2410.16666
dc.identifier.uri: http://hdl.handle.net/11603/37070
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Information Systems Department
dc.relation.ispartof: UMBC Faculty Collection
dc.rights: This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
dc.rights: Public Domain
dc.rights.uri: https://creativecommons.org/publicdomain/mark/1.0/
dc.subject: UMBC Mobile, Pervasive and Sensor Computing Lab (MPSC Lab)
dc.subject: Computer Science - Machine Learning
dc.subject: Computer Science - Robotics
dc.title: QuasiNav: Asymmetric Cost-Aware Navigation Planning with Constrained Quasimetric Reinforcement Learning
dc.type: Text
dcterms.creator: https://orcid.org/0009-0009-4461-7604
dcterms.creator: https://orcid.org/0000-0002-8324-1197

Files

Original bundle

Name: 2410.16666v1.pdf
Size: 12.67 MB
Format: Adobe Portable Document Format