Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach

dc.contributor.author: Mazumder, Arnab
dc.contributor.author: Lyons, Niall
dc.contributor.author: Pandey, Ashutosh
dc.contributor.author: Santra, Avik
dc.contributor.author: Mohsenin, Tinoosh
dc.date.accessioned: 2023-11-02T13:48:57Z
dc.date.available: 2023-11-02T13:48:57Z
dc.date.issued: 2023-07-11
dc.description: Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach; Helsinki, Finland; 5 September 2023
dc.description.abstract: Explainability of neural network predictions is essential for understanding feature importance and gaining interpretable insight into model performance. However, explanations of neural network outcomes are mostly limited to visualization, and little work uses these explanations as feedback to improve model performance. In this work, model explanations are fed back into feed-forward training to help the model generalize better. To this end, a custom weighted loss is proposed in which the weights are generated from the Euclidean distances between true LIME (Local Interpretable Model-Agnostic Explanations) explanations and model-predicted LIME explanations. In addition, because all training data are rarely available at once in practical training scenarios, it is imperative to develop a solution that lets the model learn sequentially without losing information about previous data distributions. The framework therefore combines the custom weighted loss with Elastic Weight Consolidation (EWC) to maintain performance on sequential test sets. The proposed custom training procedure yields a consistent accuracy improvement of 0.5% to 1.5% across all phases of the incremental learning setup, compared to traditional loss-based training, for the keyword spotting task on the Google Speech Commands dataset.
dc.description.uri: https://arxiv.org/abs/2211.01413
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2xwem-qmgv
dc.identifier.uri: https://doi.org/10.48550/arXiv.2211.01413
dc.identifier.uri: http://hdl.handle.net/11603/30484
dc.language.iso: en_US
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.relation.ispartof: UMBC Information Systems Department
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.rights: Attribution 4.0 International (CC BY 4.0 DEED)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.title: Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-9550-7917
dcterms.creator: https://orcid.org/0000-0001-5551-2124
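Note: the following is a minimal sketch, not the authors' released code, of the idea described in the abstract above: per-sample loss weights derived from the Euclidean distance between a reference ("true") LIME explanation and the LIME explanation of the model's prediction, combined with an EWC penalty for sequential training. The exact weighting formula, helper names, and hyperparameters (alpha, lam) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def explanation_weight(true_expl: torch.Tensor,
                       pred_expl: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """Assumed weighting: samples whose predicted LIME explanation is far
    (in Euclidean distance) from the reference explanation get a larger weight."""
    dist = torch.norm(true_expl - pred_expl, p=2, dim=-1)   # [batch]
    return 1.0 + alpha * dist                               # weight >= 1


def weighted_ce_loss(logits, targets, weights):
    """Cross-entropy weighted per sample by the explanation-distance weights."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).mean()


def ewc_penalty(model, fisher, old_params, lam: float = 10.0):
    """Standard EWC regularizer: penalize movement of parameters that carried
    high Fisher information for previously seen data."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2.0 * penalty


# Illustrative training step (shapes and helper variables are assumptions):
#   logits = model(x)
#   w = explanation_weight(lime_true, lime_pred)   # both: [batch, n_features]
#   loss = weighted_ce_loss(logits, y, w) + ewc_penalty(model, fisher, old_params)
#   loss.backward(); optimizer.step()
```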

Files

Original bundle
Name: 2211.01413.pdf
Size: 628.54 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission