QPRL: Learning Optimal Policies with Quasi-Potential Functions for Asymmetric Traversal
| dc.contributor.author | Hossain, Jumman | |
| dc.contributor.author | Roy, Nirmalya | |
| dc.date.accessioned | 2025-07-30T19:22:17Z | |
| dc.date.issued | 2025-07-15 | |
| dc.description | 42nd International Conference on Machine Learning (ICML) Vancouver, Canada, July 13 - 19, 2025 | |
| dc.description.abstract | Reinforcement learning (RL) in real-world tasks such as robotic navigation often encounters environments with asymmetric traversal costs, where actions like climbing uphill versus moving downhill incur distinctly different penalties, or transitions may become irreversible. While recent quasimetric RL methods relax symmetry assumptions, they typically do not explicitly account for path-dependent costs or provide rigorous safety guarantees. We introduce Quasi-Potential Reinforcement Learning (QPRL), a novel framework that explicitly decomposes asymmetric traversal costs into a path-independent potential function (Φ) and a path-dependent residual (Ψ). This decomposition allows efficient learning and stable policy optimization via a Lyapunov-based safety mechanism. Theoretically, we prove that QPRL achieves convergence with improved sample complexity of Õ(√T), surpassing prior quasimetric RL bounds of Õ(T). Empirically, our experiments demonstrate that QPRL attains state-of-the-art performance across various navigation and control tasks, significantly reducing irreversible constraint violations by approximately 4× compared to baselines. | |
| dc.description.sponsorship | This work has been partially supported by ONR Grant #N00014-23-1-2119, U.S. Army Grant #W911NF2120076, U.S. Army Grant #W911NF2410367, NSF REU Site Grant #2050999, NSF CNS EAGER Grant #2233879, and NSF CAREER Award #1750936. | |
| dc.description.uri | https://openreview.net/pdf?id=eU8vAuMlpH | |
| dc.format.extent | 18 pages | |
| dc.genre | conference papers and proceedings | |
| dc.identifier | doi:10.13016/m2h9jp-lro6 | |
| dc.identifier.citation | Hossain, Jumman, and Nirmalya Roy. “QPRL: Learning Optimal Policies with Quasi-Potential Functions for Asymmetric Traversal,” May 1, 2025. https://openreview.net/pdf?id=eU8vAuMlpH. | |
| dc.identifier.uri | http://hdl.handle.net/11603/39522 | |
| dc.language.iso | en_US | |
| dc.publisher | ICML 2025 | |
| dc.relation.isAvailableAt | The University of Maryland, Baltimore County (UMBC) | |
| dc.relation.ispartof | UMBC Information Systems Department | |
| dc.relation.ispartof | UMBC Faculty Collection | |
| dc.relation.ispartof | UMBC Student Collection | |
| dc.rights | Attribution 4.0 International | |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | UMBC Mobile, Pervasive and Sensor Computing Lab (MPSC Lab) | |
| dc.title | QPRL: Learning Optimal Policies with Quasi-Potential Functions for Asymmetric Traversal | |
| dc.type | Text | |
| dcterms.creator | https://orcid.org/0009-0009-4461-7604 |
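The abstract's central idea, writing an asymmetric traversal cost as c(s, s') = Φ(s') − Φ(s) + Ψ(s, s') with a path-independent potential Φ and a nonnegative path-dependent residual Ψ, can be illustrated on a toy slope. The minimal sketch below recovers such a split by fitting Φ to the antisymmetric part of the cost via least squares and taking Ψ as the symmetric remainder. This is one standumbent construction consistent with the abstract, not necessarily the paper's learned objective; the graph, the cost values, and all variable names are assumptions made for the example.

```python
import numpy as np

# Toy asymmetric-cost graph: states 0..4 along a slope.
# Moving "uphill" (s -> s+1) costs 2.0; "downhill" (s+1 -> s) costs 1.0.
n = 5
cost = {}
for s in range(n - 1):
    cost[(s, s + 1)] = 2.0  # uphill
    cost[(s + 1, s)] = 1.0  # downhill

# Decompose c(s, s') = Phi(s') - Phi(s) + Psi(s, s'):
# fit the path-independent potential Phi to the antisymmetric part of the
# cost by least squares, then take Psi as the symmetric residual.
edges = [(s, t) for (s, t) in cost if s < t]
A = np.zeros((len(edges), n))
b = np.zeros(len(edges))
for i, (s, t) in enumerate(edges):
    A[i, t], A[i, s] = 1.0, -1.0                # Phi(t) - Phi(s)
    b[i] = 0.5 * (cost[(s, t)] - cost[(t, s)])  # antisymmetric part of c

phi, *_ = np.linalg.lstsq(A, b, rcond=None)
phi -= phi[0]  # potentials are only defined up to an additive constant

psi = {(s, t): c - (phi[t] - phi[s]) for (s, t), c in cost.items()}

print("Phi:", np.round(phi, 3))        # [0.  0.5 1.  1.5 2. ]
for (s, t), v in sorted(psi.items()):
    print(f"Psi({s}->{t}) = {v:.3f}")  # symmetric residual, 1.5 everywhere
```

On this toy slope the fit recovers Φ(s) = 0.5·s and a constant residual Ψ = 1.5, so the uphill/downhill asymmetry is carried entirely by the potential term, which is exactly the separation the abstract describes.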
Files
Original bundle
- Name: 4981_QPRL_Learning_Optimal_Pol.pdf
- Size: 1.85 MB
- Format: Adobe Portable Document Format
