Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models

Date

2024-08-14

Rights

This work was written as part of one of the author's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
Public Domain

Abstract

Deep neural network (DNN) models have demonstrated impressive performance in various domains, yet their application in cognitive neuroscience remains limited by their lack of interpretability. In this study, we employ two structurally different and complementary DNN-based models, a one-dimensional convolutional neural network (1D-CNN) and a bidirectional long short-term memory network (BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a focus on understanding the cognitive underpinnings of the classification decisions. We show that, despite the architectural differences, both models consistently exhibit a robust relationship between prediction accuracy and individual cognitive performance, such that lower cognitive performance leads to poorer prediction accuracy. To achieve model explainability, we used permutation techniques to compute feature importance, allowing us to identify the brain regions most critical to model predictions. Across models, we found that visual networks dominated, suggesting that task-driven state differences are primarily encoded in visual processing. Attention and control networks also showed relatively high importance; however, default mode and temporal-parietal networks contributed negligibly to differentiating cognitive states. Additionally, we observed individual trait-based effects and subtle model-specific differences: the 1D-CNN showed slightly better overall performance, while the BiLSTM showed greater sensitivity to individual behavior; these initial findings require further research and robustness testing to be fully established. Our work underscores the importance of explainable DNN models in uncovering the neural mechanisms underlying cognitive state transitions, providing a foundation for future work in this domain.
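The abstract does not specify the models' exact architectures; the following is a minimal PyTorch sketch of the two classifier families it names, operating on region-level BOLD time series. The number of regions (n_rois), the number of cognitive states (n_states), and all layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketches of the two classifier families named in the abstract.
# n_rois, n_states, and all layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CNN1DClassifier(nn.Module):
    """1D convolution over time, one input channel per brain region (ROI)."""
    def __init__(self, n_rois: int = 360, n_states: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_rois, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(128, n_states)

    def forward(self, x):  # x: (batch, n_rois, timepoints)
        return self.classifier(self.features(x).squeeze(-1))

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over the ROI time series."""
    def __init__(self, n_rois: int = 360, n_states: int = 8, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_rois, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_states)

    def forward(self, x):  # x: (batch, n_rois, timepoints)
        out, _ = self.lstm(x.transpose(1, 2))  # LSTM expects (batch, time, features)
        return self.classifier(out[:, -1, :])  # representation at the last timestep
```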
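The permutation approach to feature importance can be sketched as follows: shuffle one region's signal across samples (breaking its association with the true state label) and record the resulting drop in held-out accuracy. This is a generic sketch under assumed data shapes, not the authors' exact procedure; the function names and the repeat count are hypothetical.

```python
# A minimal permutation-importance sketch, assuming a trained model, a
# held-out tensor X of shape (n_samples, n_rois, timepoints), and integer
# state labels y. Names and defaults here are hypothetical.
import torch

@torch.no_grad()
def accuracy(model, X, y):
    preds = model(X).argmax(dim=1)
    return (preds == y).float().mean().item()

@torch.no_grad()
def permutation_importance(model, X, y, n_repeats: int = 10):
    model.eval()
    baseline = accuracy(model, X, y)
    n_rois = X.shape[1]
    importance = torch.zeros(n_rois)
    for roi in range(n_rois):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.clone()
            # Shuffle this region's time series across samples, breaking
            # its association with the true cognitive state.
            perm = torch.randperm(X.shape[0])
            X_perm[:, roi, :] = X[perm, roi, :]
            drops.append(baseline - accuracy(model, X_perm, y))
        importance[roi] = torch.tensor(drops).mean()
    return baseline, importance
```

Averaging the per-ROI drops within each atlas-defined network would yield the network-level importances the abstract reports (visual, attention/control, default mode, temporal-parietal); that aggregation depends on the parcellation used and is not shown here.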