Authors: Bai, Haishi; Dujmovic, Jozo; Wang, Jianwu
Date Accessioned: 2025-06-17
Date Available: 2025-06-17
Date Issued: 2025-05-22
DOI: https://doi.org/10.48550/arXiv.2505.14510
Handle: http://hdl.handle.net/11603/38908

Abstract: As machine learning models and autonomous agents are increasingly deployed in high-stakes, real-world domains such as healthcare, security, finance, and robotics, the need for transparent and trustworthy explanations has become critical. To ensure end-to-end transparency of AI decisions, we need models that are not only accurate but also fully explainable and human-tunable. We introduce BACON, a novel framework for automatically training explainable AI models for decision-making problems using graded logic. BACON achieves high predictive accuracy while offering full structural transparency and precise, logic-based symbolic explanations, enabling effective human-AI collaboration and expert-guided refinement. We evaluate BACON on a diverse set of scenarios: classic Boolean approximation, Iris flower classification, house purchasing decisions, and breast cancer diagnosis. In each case, BACON provides high-performance models while producing compact, human-verifiable decision logic. These results demonstrate BACON's potential as a practical and principled approach for delivering crisp, trustworthy explainable AI.

Extent: 21 pages
Language: en-US
Rights: Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/)
Department: UMBC Big Data Analytics Lab
Subjects: Computer Science - Artificial Intelligence; Computer Science - Machine Learning
Title: BACON: A fully explainable AI model with graded logic for decision making problems
Type: Text
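Note: The graded logic the abstract refers to is commonly formalized (in Dujmovic's work) via weighted power means, which interpolate continuously between conjunction and disjunction. The sketch below is a minimal illustration of that aggregator only, under standard assumptions; the function name graded_aggregate and the example weights and exponents are illustrative and are not taken from the BACON paper.

import numpy as np

def graded_aggregate(x, w, r):
    """Weighted power mean: a basic graded-logic aggregator.

    x : degrees of satisfaction, each in [0, 1]
    w : nonnegative weights summing to 1
    r : exponent controlling andness/orness
        r -> -inf approaches pure conjunction (min),
        r = 1 is the neutral arithmetic mean,
        r -> +inf approaches pure disjunction (max).
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    if r == 0:
        # Limit case at r = 0: the weighted geometric mean
        return float(np.prod(x ** w))
    return float(np.sum(w * x ** r) ** (1.0 / r))

# A mostly conjunctive criterion ("all inputs should be high"):
print(graded_aggregate([0.9, 0.8, 0.4], [0.5, 0.3, 0.2], r=-2.0))
# A mostly disjunctive criterion ("any high input helps"):
print(graded_aggregate([0.9, 0.8, 0.4], [0.5, 0.3, 0.2], r=3.0))

Because each node of such a model is a named logic operation with explicit weights, the resulting decision structure can be read, verified, and tuned by a human expert, which is the kind of transparency the abstract claims for BACON.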