UMBC Center for Accelerated Real Time Analytics
Permanent URI for this collection: http://hdl.handle.net/11603/26415
Real time analytics is the leading edge of a smart data revolution, pushed by Internet advances in sensor hardware on one side and AI/ML streaming acceleration on the other. The Center for Accelerated Real Time Analytics (CARTA) explores the realm of streaming applications of Magna Analytics. The center works with next-generation hardware technologies, such as the IBM Minsky with onboard GPU-accelerated processors and Flash RAM, and Smart Cyber-Physical Sensor Systems, to build Cognitive Analytics systems and Active Storage devices for real-time analytics. This will lead to the automated ingestion and simultaneous analytics of Big Datasets generated in various domains, including Cyberspace, Healthcare, the Internet of Things (IoT), and the Scientific arena, and to the creation of self-learning, self-correcting “smart” systems.
Browse
Recent Submissions
Item BACON: A fully explainable AI model with graded logic for decision making problems (2025-05-22). Bai, Haishi; Dujmovic, Jozo; Wang, Jianwu.
As machine learning models and autonomous agents are increasingly deployed in high-stakes, real-world domains such as healthcare, security, finance, and robotics, the need for transparent and trustworthy explanations has become critical. To ensure end-to-end transparency of AI decisions, we need models that are not only accurate but also fully explainable and human-tunable. We introduce BACON, a novel framework for automatically training explainable AI models for decision making problems using graded logic. BACON achieves high predictive accuracy while offering full structural transparency and precise, logic-based symbolic explanations, enabling effective human-AI collaboration and expert-guided refinement. We evaluate BACON on a diverse set of scenarios: classic Boolean approximation, Iris flower classification, house purchasing decisions, and breast cancer diagnosis. In each case, BACON provides high-performance models while producing compact, human-verifiable decision logic. These results demonstrate BACON's potential as a practical and principled approach for delivering crisp, trustworthy explainable AI.

Item Facial Expression Recognition with an Efficient Mix Transformer for Affective Human-Robot Interaction (IEEE, 2025-05-07). Safavi, Farshad; Patel, Kulin; Vinjamuri, Ramana.
Emotion recognition can significantly enhance interactions between humans and robots, particularly in shared tasks and collaborative processes. Facial Expression Recognition (FER) allows affective robots to adapt their behavior in a socially appropriate manner. However, the potential of efficient Transformers for FER remains underexplored. Additionally, leveraging self-attention mechanisms to create segmentation masks that accentuate facial landmarks for improved accuracy has not been fully investigated.
Furthermore, current FER methods lack computational efficiency and scalability, limiting their applicability in real-time scenarios. Therefore, we developed the robust, scalable, and generalizable EmoFormer model, incorporating an efficient Mix Transformer block along with a novel fusion block. Our approach scales across a range of models from EmoFormer-B0 to EmoFormer-B2. The main innovation lies in the fusion block, which uses element-wise multiplication of facial landmarks to emphasize their role in the feature map. This integration of local and global attention creates powerful representations. The efficient self-attention mechanism within the Mix Transformer establishes connections among various facial regions. It enhances efficiency while maintaining accuracy in emotion classification from facial landmarks. We evaluated our approach for both categorical and dimensional facial expression recognition on four datasets: FER2013, AffectNet-7, AffectNet-8, and DEAP. Our ensemble method achieved state-of-the-art results, with accuracies of 77.35% on FER2013, 67.71% on AffectNet-7, and 65.14% on AffectNet-8. For the DEAP dataset, our method achieved 98.07% accuracy for arousal and 97.86% for valence, demonstrating the robustness and generalizability of our models. As an application of our method, we implemented EmoFormer in an affective robotic arm, enabling the human-robot interaction system to adjust its speed based on the user's facial expressions. This was validated through a user experiment with six subjects, demonstrating the feasibility and effectiveness of our approach in creating emotionally intelligent human-robot interactions. 
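The fusion idea described above, element-wise multiplication that lets facial-landmark information gate the feature map, can be sketched in a few lines. This is a minimal illustration with hypothetical array shapes, not the paper's actual EmoFormer layer:

```python
import numpy as np

def fuse_landmarks(feature_map: np.ndarray, landmark_mask: np.ndarray) -> np.ndarray:
    """Element-wise fusion: emphasize feature-map locations that coincide
    with facial landmarks by gating every channel with the mask.
    (Hypothetical shapes; not the paper's exact layer.)"""
    # Broadcast the single-channel (H, W) mask across all C channels.
    return feature_map * landmark_mask[..., None]

# Toy 4x4 feature map with 3 channels and a binary landmark mask.
fmap = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[1, 2] = 1.0          # a single "landmark" location
fused = fuse_landmarks(fmap, mask)
```

Because non-landmark locations are zeroed, any downstream attention or pooling concentrates on the landmark regions, which is the intuition behind combining local (landmark) and global (attention) information.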
Overall, our results demonstrate that EmoFormer is a robust, efficient, and scalable solution for FER, with significant potential for advancing human-robot interaction through emotion-aware robotics.

Item Causal Feedback Discovery using Convergence Cross Mapping from Sea Ice Data (2025-05-13). Nji, Francis Ndikum; Mostafa, Seraj Al Mahmud; Wang, Jianwu.
The Arctic region is experiencing accelerated warming, largely driven by complex and nonlinear interactions among time series atmospheric variables such as sea ice extent, short-wave radiation, temperature, and humidity. These interactions significantly alter sea ice dynamics and atmospheric conditions, leading to increased sea ice loss. This loss further intensifies Arctic amplification and disrupts weather patterns through various feedback mechanisms. Although stochastic methods such as Granger causality, PCMCI, and VarLiNGAM estimate causal interactions among atmospheric variables, they are limited to unidirectional causal relationships and often miss weak causal interactions and feedback loops in nonlinear settings. In this study, we show that Convergent Cross Mapping (CCM) can effectively estimate nonlinear causal coupling and identify weak interactions and causal feedback loops among atmospheric variables. CCM employs state space reconstruction (SSR), which makes it suitable for complex nonlinear dynamic systems. While CCM has been successfully applied to a diverse range of systems, including fisheries and online social networks, its application in climate science is under-explored.
Our results show that CCM effectively uncovers strong nonlinear causal feedback loops and weak causal interactions often overlooked by stochastic methods in complex nonlinear dynamic atmospheric systems.

Item Enhancing Satellite Object Localization with Dilated Convolutions and Attention-aided Spatial Pooling (AMLDS, 2025-05-08). Mostafa, Seraj Al Mahmud; Wang, Chenxi; Yue, Jia; Hozumi, Yuta; Wang, Jianwu.
Object localization in satellite imagery is particularly challenging due to the high variability of objects, low spatial resolution, and interference from noise and dominant features such as clouds and city lights. In this research, we focus on three satellite datasets: upper atmospheric Gravity Waves (GW), mesospheric Bores (Bore), and Ocean Eddies (OE), each presenting its own unique challenges. These challenges include the variability in the scale and appearance of the main object patterns, where the size, shape, and feature extent of objects of interest can differ significantly. To address these challenges, we introduce YOLO-DCAP, a novel enhanced version of YOLOv5 designed to improve object localization in these complex scenarios. YOLO-DCAP incorporates a Multi-scale Dilated Residual Convolution (MDRC) block to capture multi-scale features with varying dilation rates, and an Attention-aided Spatial Pooling (AaSP) module to focus on globally relevant spatial regions, enhancing feature selection. These structural improvements help to better localize objects in satellite imagery. Experimental results demonstrate that YOLO-DCAP significantly outperforms both the YOLO base model and state-of-the-art approaches, achieving an average improvement of 20.95% in mAP50 and 32.23% in IoU over the base model, and 7.35% and 9.84% respectively over state-of-the-art alternatives, consistently across all three satellite datasets. These consistent gains highlight the robustness and generalizability of the proposed approach.
Our code is open-sourced at https://github.com/AI-4-atmosphere-remote-sensing/satellite-object-localization.

Item Deep Fusion of Neurophysiological and Facial Features for Enhanced Emotion Detection (IEEE, 2025). Safavi, Farshad; Venkannagari, Vikas Reddy; Parikh, Dev; Vinjamuri, Ramana.
The fusion of facial and neurophysiological features for multimodal emotion detection is vital for applications in healthcare, wearable devices, and human-computer interaction, as it enables a more comprehensive understanding of human emotions. Traditionally, the integration of facial expressions and neurophysiological signals has required specialized knowledge and complex preprocessing. With the rise of deep learning and artificial intelligence (AI), new methodologies in affective computing allow for the seamless fusion of multimodal signals, advancing emotion recognition systems. In this paper, we present a novel multimodal deep network that leverages transformers to extract comprehensive features from neurophysiological data, which are then fused with facial expression features for emotion classification. Our transformer-based model analyzes neurophysiological time-series data, while transformer-inspired methods extract facial expression features, enabling the classification of complex emotional states. We compare single-modality with multimodal systems, testing our model on Electroencephalography (EEG) signals using the DEAP and Lie Detection datasets. Our hybrid approach effectively captures intricate temporal and spatial patterns in the data, significantly enhancing the system's emotion recognition accuracy. Validated on the DEAP dataset, our method achieves near state-of-the-art performance, with accuracy rates of 97.78%, 97.64%, 97.91%, and 97.62% for arousal, valence, liking, and dominance, respectively. Furthermore, we achieved a precision of 97.9%, a ROC AUC score of 97.6%, an F1-score of 98.1%, and a recall of 98.2%, demonstrating the model's robust performance.
We demonstrated the effectiveness of this method, specifically for EEG caps with a limited number of electrodes, in emotion detection for wearable devices.

Item Functional evaluation of a real-time EMG controlled prosthetic hand (Cambridge University Press, 2025-04-07). Kalita, Amlan Jyoti; Chanu, Maibam Pooya; Kakoty, Nayan M.; Vinjamuri, Ramana; Borah, Satyajit.
Electromyogram (EMG)-controlled prosthetic hands have advanced significantly during the past two decades. However, most of the currently available prosthetic hands fail to replicate human hand functionality and controllability. To measure the emulation of the human hand by a prosthetic hand, it is important to evaluate its functional characteristics. Moreover, incorporating feedback from end users during clinical testing is crucial for the precise assessment of a prosthetic hand. The work reported in this manuscript details the functional characteristics of an EMG-CoNtrolled PRosthetIC Hand called ENRICH. ENRICH is a real-time EMG controlled prosthetic hand that can grasp objects in 250.8 ± 1.1 ms, fulfilling the neuromuscular constraint of a human hand. ENRICH is evaluated in comparison to 26 laboratory prototypes and 10 commercial variants of prosthetic hands. The hand was evaluated in terms of size, weight, operation time, weight lifting capacity, finger joint range of motion, control strategy, degrees of freedom, grasp force, and clinical testing. The box and block test and pick and place test showed ENRICH's functionality and controllability. The functional evaluation reveals that ENRICH has the potential to restore functionality to hand amputees, improving their quality of life.

Item Integrating Frequency-Domain Representations with Low-Rank Adaptation in Vision-Language Models (2025-03-08). Khan, Md Azim; Gangopadhyay, Aryya; Wang, Jianwu; Erbacher, Robert F.
Situational awareness applications rely heavily on real-time processing of visual and textual data to provide actionable insights.
Vision language models (VLMs) have become essential tools for interpreting complex environments by connecting visual inputs with natural language descriptions. However, these models often face computational challenges, especially when required to perform efficiently in real environments. This research presents a novel vision language model (VLM) framework that leverages frequency-domain transformations and low-rank adaptation (LoRA) to enhance feature extraction, scalability, and efficiency. Unlike traditional VLMs, which rely solely on spatial-domain representations, our approach incorporates Discrete Fourier Transform (DFT) based low-rank features while retaining pretrained spatial weights, enabling robust performance in noisy or low-visibility scenarios. We evaluated the proposed model on caption generation and Visual Question Answering (VQA) tasks using benchmark datasets with varying levels of Gaussian noise. Quantitative results demonstrate that our model achieves evaluation metrics comparable to state-of-the-art VLMs, such as CLIP ViT-L/14 and SigLIP. Qualitative analysis further reveals that our model provides more detailed and contextually relevant responses, particularly for real-world images captured by a RealSense camera mounted on an Unmanned Ground Vehicle (UGV).

Item Impact of increased anthropogenic Amazon wildfires on Antarctic Sea ice melt via albedo reduction (Cambridge University Press, 2025-03-10). Chakraborty, Sudip; Devnath, Maloy Kumar; Jabeli, Atefeh; Kulkarni, Chhaya; Boteju, Gehan; Wang, Jianwu; Janeja, Vandana.
This study shows the impact of black carbon (BC) aerosol atmospheric rivers (AAR) on Antarctic Sea ice retreat. We detect that a higher number of BC AARs arrived in the Antarctic region due to increased anthropogenic wildfire activities in the Amazon in 2019 compared to 2018.
Our analyses suggest that the BC AARs led to a reduction in sea ice albedo, an increase in the amount of sunlight absorbed at the surface, and a significant reduction of sea ice over the Weddell, Ross Sea (Ross), and Indian Ocean (IO) regions in 2019. The Weddell region experienced the largest sea ice retreat (~33,000 km²) during the presence of BC AARs, compared to ~13,000 km² during non-BC days. We used a suite of data science techniques, including random forest, elastic net regression, matrix profile, canonical correlations, and causal discovery analyses, to discover the effects and validate them. Random forest, elastic net regression, and causal discovery analyses show that the shortwave upward radiative flux or reflected sunlight, temperature, and longwave upward energy from the earth are the most important features that affect sea ice extent. Canonical correlation analysis confirms that aerosol optical depth is negatively correlated with albedo, positively correlated with shortwave energy absorbed at the surface, and negatively correlated with sea ice extent. The relationship is stronger in 2019 than in 2018. This study also employs the matrix profile and the convolution operation of a Convolutional Neural Network (CNN) to detect anomalous events in sea ice loss. These methods show that more anomalous melting events were detected over the Weddell and Ross regions.

Item Enhancing prosthetic hand control: A synergistic multi-channel electroencephalogram (Cambridge University Press, 2024-11-28). Maibam, Pooya Chanu; Pei, Dingyi; Olikkal, Parthan Sathishkumar; Vinjamuri, Ramana; Kakoty, Nayan M.
Electromyogram (EMG) has been a fundamental approach for prosthetic hand control. However, it is limited by the functionality of residual muscles and muscle fatigue. Currently, exploring temporal shifts in brain networks and accurately classifying noninvasive electroencephalogram (EEG) for prosthetic hand control remains challenging.
In this manuscript, it is hypothesized that the coordinated and synchronized temporal patterns within the brain network, termed brain synergy, contain valuable information to decode hand movements. 32-channel EEGs were acquired from 10 healthy participants during hand grasping and opening. Synergistic spatial distribution patterns and power spectra of brain activity were investigated using independent component analysis of EEG. Out of 32 EEG channels, 15 channels spanning the frontal, central, and parietal regions were strategically selected based on the synergy of the spatial distribution pattern and power spectrum of independent components. Time-domain and synergistic features were extracted from the selected 15 EEG channels. These features were employed to train a Bayesian optimizer-based support vector machine (SVM). The optimized SVM classifier achieved an average testing accuracy of 94.39 ± 0.84% using synergistic features. The paired t-test showed that synergistic features yielded significantly higher area under curve values (p < .05) compared to time-domain features in classifying hand movements. The output of the classifier was employed for the control of the prosthetic hand. This synergistic approach to analyzing temporal activities in motor control and to the control of prosthetic hands has potential contributions to future research. It addresses the limitations of EMG-based approaches and emphasizes the effectiveness of synergy-based control for prostheses.

Item Electroencephalogram based Control of Prosthetic Hand using Optimizable Support Vector Machine (ACM, 2023-11-02). Pooya Chanu, Maibam; Pei, Dingyi; Olikkal, Parthan Sathishkumar; Vinjamuri, Ramana; Kakoty, Nayan M.
Research on electromyogram (EMG) controlled prosthetic hands has advanced significantly, enriching the social and professional lives of people with hand amputation. Even so, the non-functionality of motor neurons in the remnant muscles impedes the generation of EMG as a control signal.
However, such people have the same ability as healthy individuals to generate motor cortical activity. The work presented in this paper investigates electroencephalogram (EEG)-based control of a prosthetic hand. EEGs of 10 healthy subjects performing grasping operations were acquired for classification of hand movements. 15 EEG channels were selected to classify hand open and close operations. Hand-movement-class-specific time-domain features were extracted from the filtered EEG. A support vector machine (SVM) was employed with 24-fold cross-validation for classification using the extracted features. SVM hyper-parameters for the classification model were optimized with a Bayesian optimizer using minimum prediction error as the objective function. During training and testing of the classifier model, average accuracies of 96.8 ± 0.98% and 93.4 ± 1.16%, respectively, were achieved across the subjects. The trained classifier model was employed to control prosthetic hand open and close operations. This study demonstrates that EEG can be used to control a prosthetic hand by amputees with motor neuron disabilities.

Item Decoding motor execution and motor imagery from EEG with deep learning and source localization (Elsevier, 2025-06-01). Kaviri, Sina Makhdoomi; Vinjamuri, Ramana.
The use of noninvasive imaging techniques has become pivotal in understanding human brain functionality. While modalities like MEG and fMRI offer excellent spatial resolution, their limited temporal resolution, often measured in seconds, restricts their application in real-time brain activity monitoring. In contrast, EEG provides superior temporal resolution, making it ideal for real-time applications in brain–computer interface systems. In this study, we combined deep learning with source localization to classify two motor task types: motor execution and motor imagery.
For motor imagery tasks—left hand, right hand, both feet, and tongue—we transformed EEG signals into cortical activity maps using Minimum Norm Estimation (MNE), dipole fitting, and beamforming. These were analyzed with a custom ResNet CNN, where beamforming achieved the highest accuracy of 99.15%, outperforming most traditional methods. For motor execution involving six types of reach-and-grasp tasks, beamforming achieved 90.83% accuracy compared to 56.39% from a sensor-domain approach (ICA + PSD + TSCR-Net). These results underscore the significant advantages of integrating source localization with deep learning for EEG-based motor task classification, demonstrating that source localization techniques greatly enhance classification accuracy compared to sensor-domain approaches.

Item Decoding and generating synergy-based hand movements using electroencephalography during motor execution and motor imagery (Elsevier, 2025-06-01). Pei, Dingyi; Vinjamuri, Ramana.
Brain-machine interfaces (BMIs) have proven valuable in motor control and rehabilitation. Motor imagery (MI) is a key tool for developing BMIs, particularly for individuals with impaired limb function. Motor planning and internal programming are hypothesized to be similar during motor execution (ME) and motor imagery. The anatomical and functional similarity between motor execution and motor imagery suggests that synergy-based movement generation can be achieved by extracting neural correlates of synergies or movement primitives from motor imagery. This study explored the feasibility of synergy-based hand movement generation using electroencephalogram (EEG) from imagined hand movements. Ten subjects participated in an experiment to imagine and execute hand movement tasks while their hand kinematics and neural activity were recorded. Hand kinematic synergies derived from executed movements were correlated with EEG spectral features to create a neural decoding model.
This model was used to decode the weights of kinematic synergies from motor imagery EEG. These decoded weights were then combined with kinematic synergies to generate hand movements. As a result, the decoding model successfully predicted hand joint angular velocity patterns associated with grasping different objects. This adaptability demonstrates the model's ability to capture the motor control characteristics of ME and MI, advancing our understanding of MI-based neural decoding. The results hold promise for potential applications in noninvasive synergy-based neuromotor control and rehabilitation for populations with upper limb motor disabilities.

Item Correlation to Causation: A Causal Deep Learning Framework for Arctic Sea Ice Prediction (2025-03-03). Hossain, Emam; Ferdous, Muhammad Hasan; Wang, Jianwu; Subramanian, Aneesh; Gani, Md Osman.
Traditional machine learning and deep learning techniques rely on correlation-based learning, often failing to distinguish spurious associations from true causal relationships, which limits robustness, interpretability, and generalizability. To address these challenges, we propose a causality-driven deep learning framework that integrates Multivariate Granger Causality (MVGC) and PCMCI+ causal discovery algorithms with a hybrid deep learning architecture. Using 43 years (1979-2021) of daily and monthly Arctic Sea Ice Extent (SIE) and ocean-atmospheric datasets, our approach identifies causally significant factors, prioritizes features with direct influence, reduces feature overhead, and improves computational efficiency. Experiments demonstrate that integrating causal features enhances the deep learning model's predictive accuracy and interpretability across multiple lead times.
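A minimal sketch of the Granger-causality idea underlying MVGC-style feature selection: a variable x Granger-causes y if adding x's lags to an autoregressive model of y reduces the residual variance. The toy OLS version below uses simulated data and is illustrative only, not the paper's pipeline:

```python
import numpy as np

def granger_gain(target, driver, lags=2):
    """Ratio of restricted to full residual variance when the driver's
    lags are added to an autoregressive fit of the target.
    Values well above 1 suggest the driver Granger-causes the target."""
    n = len(target)
    Y = target[lags:]
    # Restricted design: the target's own lags only.
    A_r = np.column_stack([target[lags - i : n - i] for i in range(1, lags + 1)])
    # Full design: target lags plus the driver's lags.
    A_f = np.column_stack([A_r] + [driver[lags - i : n - i] for i in range(1, lags + 1)])
    def resid_var(A):
        A1 = np.column_stack([A, np.ones(len(A))])  # add intercept
        beta, *_ = np.linalg.lstsq(A1, Y, rcond=None)
        r = Y - A1 @ beta
        return r @ r / len(r)
    return resid_var(A_r) / resid_var(A_f)

# Toy system in which x drives y with a one-step lag.
rng = np.random.default_rng(42)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gain_xy = granger_gain(y, x)   # large: x's lags explain much of y
gain_yx = granger_gain(x, y)   # near 1: y's lags do not help predict x
```

Real frameworks add significance tests (e.g., an F-test on the variance ratio) and, as in the paper, combine such scores with conditional-independence methods like PCMCI+ before feeding the selected features to the forecasting model.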
Beyond SIE prediction, the proposed framework offers a scalable solution for dynamic, high-dimensional systems, advancing both theoretical understanding and practical applications in predictive modeling.

Item Performance Characteristics of EMG Controlled Prosthetic Hand (ACM, 2023-11-02). Kalita, Amlan Jyoti; Chanu, Maibam Pooya; Kakoty, Nayan M.; Vinjamuri, Ramana; Borah, Satyajit.
Much development has been seen in commercial and laboratory prototypes of prosthetic hands during the last two decades. However, prosthetic hands emulating human hand characteristics are very limited. To emulate the human hand, evaluating the performance characteristics of prosthetic hands is of paramount importance. This paper explains the performance characteristics of an EMG CoNtrolled PRosthetIC Hand called ENRICH, incorporating end users' feedback from clinical testing. ENRICH is a real-time EMG controlled prosthetic hand that can perform grasping operations in 250 ± 1.1 milliseconds, satisfying the neuromuscular constraint of the human hand. The performance characteristics of ENRICH vis-à-vis commercial and laboratory prototypes are evaluated in terms of weight, size, degrees of freedom, finger joint range of motion, control strategy, operation time, and clinical testing. This evaluation establishes ENRICH as one of the promising prosthetic hands with tangible benefits to amputees.

Item Accurate and Interpretable Radar Quantitative Precipitation Estimation with Symbolic Regression (IEEE, 2025-01-16). Zhang, Olivia; Grissom, Brianna; Pulido, Julian; Munoz-Ordaz, Kenia; He, Jonathan; Cham, Mostafa; Jing, Haotong; Qian, Weikang; Wen, Yixin; Wang, Jianwu.
Accurate quantitative precipitation estimation (QPE) is essential for managing water resources, monitoring flash floods, creating hydrological models, and more. Traditional methods of obtaining precipitation data from rain gauges and radars have limitations such as sparse coverage and inaccurate estimates for different precipitation types and intensities.
Symbolic regression, a machine learning method that generates mathematical equations fitting the data, presents a unique approach to estimating precipitation that is both accurate and interpretable. Using WSR-88D dual-polarimetric radar data from Oklahoma and Florida over three dates, we tested symbolic regression models involving genetic programming and deep learning, symbolic regression on separate clusters of the data, and the incorporation of knowledge-based loss terms into the loss function. We found that symbolic regression is both accurate in estimating rainfall and interpretable through learned equations. The accuracy and simplicity of the learned equations can be slightly improved by clustering the data based on select radar variables and by adjusting the loss function with knowledge-based loss terms. This research provides insights into improving QPE accuracy through interpretable symbolic regression methods.

Item A Framework for Empirical Fourier Decomposition based Gesture Classification for Stroke Rehabilitation (IEEE, 2024-11-11). Chen, Ke; Wang, Honggang; Catlin, Andrew; Satyanarayana, Ashwin; Vinjamuri, Ramana; Kadiyala, Sai Praveen.
The demand for surface electromyography (sEMG) based exoskeletons is rapidly increasing due to their non-invasive nature and ease of use. With the increasing use of Internet-of-Things (IoT) based devices in daily life, there is greater acceptance of exoskeleton-based rehabilitation. As a result, there is a need for highly accurate and generalizable gesture classification mechanisms based on sEMG data. In this work, we present a framework that pre-processes raw sEMG signals with an Empirical Fourier Decomposition (EFD) based approach followed by dimension reduction, which improved the performance of hand gesture classification. EFD's efficacy in handling the mode-mixing problem in non-stationary signals resulted in fewer decomposed components.
In the next step, a thorough analysis of the decomposed components as well as an inter-channel analysis is performed to identify the key components and channels that contribute to the improved gesture classification accuracy. As a third step, we conducted ablation studies on time-domain features to observe the variations in accuracy across different models. Finally, we present a case study comparing automated feature extraction based gesture classification with manual feature extraction based methods. Experimental results show that the manual feature extraction based method substantially outperformed the automated feature extraction based methods, emphasizing the need for rigorous fine-tuning of automated models.

Item Tutorial on Causal Inference with Spatiotemporal Data (ACM, 2024-11-04). Ali, Sahara; Wang, Jianwu.
Spatiotemporal data, which captures how variables evolve across space and time, is ubiquitous in fields such as environmental science, epidemiology, and urban planning. However, identifying causal relationships in these datasets is challenging due to the presence of spatial dependencies, temporal autocorrelation, and confounding factors. This tutorial provides a comprehensive introduction to spatiotemporal causal inference, offering both theoretical foundations and practical guidance for researchers and practitioners. We explore key concepts such as causal inference frameworks, the impact of confounding in spatiotemporal settings, and the challenges posed by spatial and temporal dependencies. The paper covers synthetic spatiotemporal benchmark data generation, widely used spatiotemporal causal inference techniques, including regression-based, propensity score-based, and deep learning-based methods, and demonstrates their application using synthetic datasets. Through step-by-step examples, readers will gain a clear understanding of how to address common challenges and apply causal inference techniques to spatiotemporal data.
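One of the propensity-score-based estimators such a tutorial typically covers is inverse propensity weighting (IPW). A minimal sketch on toy confounded data follows; it is illustrative only, with the true propensity assumed known rather than estimated:

```python
import numpy as np

def ipw_ate(treat, outcome, propensity):
    """Hajek-normalized inverse-propensity-weighted average treatment
    effect: weighted mean of treated outcomes minus weighted mean of
    control outcomes."""
    w1 = treat / propensity
    w0 = (1 - treat) / (1 - propensity)
    return np.sum(w1 * outcome) / np.sum(w1) - np.sum(w0 * outcome) / np.sum(w0)

# Toy confounded data: covariate x raises both the treatment
# probability and the outcome; the true treatment effect is 2.0.
rng = np.random.default_rng(7)
x = rng.uniform(size=5000)
e = 0.2 + 0.6 * x                              # true propensity
t = (rng.uniform(size=5000) < e).astype(float)
y = 2.0 * t + 3.0 * x + rng.standard_normal(5000) * 0.1
naive = y[t == 1].mean() - y[t == 0].mean()    # biased upward by confounding
ate = ipw_ate(t, y, e)                         # close to the true 2.0
```

In spatiotemporal settings the extra difficulty is that the propensity model must also account for spatial spillover and temporal autocorrelation, which is what distinguishes the methods surveyed in the tutorial from this plain cross-sectional sketch.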
This tutorial serves as a valuable resource for those looking to improve the rigor and reliability of their causal analyses in spatiotemporal contexts.

Item Accelerating Subglacial Bed Topography Prediction in Greenland: A Performance Evaluation of Spark-Optimized Machine Learning Models (2024). Cham, Mostafa; Tabassum, Tartela; Shakeri, Ehsan; Wang, Jianwu.

Item Identifying neurophysiological correlates of stress (Frontiers, 2024-10-24). Pei, Dingyi; Tirumala, Shravika; Tun, Kyaw T.; Ajendla, Akshara; Vinjamuri, Ramana.
Stress has been recognized as a pivotal indicator that can lead to severe mental disorders. Persistent exposure to stress increases the risk of various physical and mental health problems. Early and reliable detection of stress-related status is critical for promoting wellbeing and developing effective interventions. This study attempted multi-type and multi-level stress detection by fusing features extracted from multiple physiological signals, including electroencephalography (EEG) and peripheral physiological signals. Eleven healthy individuals participated in validated stress-inducing protocols designed to induce social and mental stress and to discriminate multi-level and multi-type stress. A range of machine learning methods were applied and evaluated on physiological signals of various durations. Average accuracies of 98.1% and 97.8% were achieved in identifying stress type and stress level, respectively, using 4-s neurophysiological signals. These findings have promising implications for enhancing the precision and practicality of real-time stress monitoring applications.

Item Investigating Causal Cues: Strengthening Spoofed Audio Detection with Human-Discernible Linguistic Features (2024-09-09). Khanjani, Zahra; Ale, Tolulope; Wang, Jianwu; Davis, Lavon; Mallinson, Christine; Janeja, Vandana.
Several types of spoofed audio, such as mimicry, replay attacks, and deepfakes, have created societal challenges to information integrity.
Recently, researchers have worked with sociolinguistics experts to label spoofed audio samples with Expert Defined Linguistic Features (EDLFs) that can be discerned by the human ear: pitch, pause, word-initial and word-final release bursts of consonant stops, audible intake or outtake of breath, and overall audio quality. It has been established that several deepfake detection algorithms improve when the traditional and common features of audio data are augmented with these EDLFs. In this paper, using a hybrid dataset comprising multiple types of spoofed audio augmented with sociolinguistic annotations, we investigate causal discovery and inference between the discernible linguistic features and the labels of the audio clips, comparing the findings of the causal models with the experts' ground truth validation labeling process. Our findings suggest that the causal models indicate the utility of incorporating linguistic features to help discern spoofed audio, as well as the overall need and opportunity to incorporate human knowledge into models and techniques for strengthening AI models. Causal discovery and inference can serve as a foundation for training humans to discern spoofed audio, as well as for automating EDLF labeling to improve the performance of common AI-based spoofed audio detectors.
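The conditional-independence tests that drive constraint-based causal discovery can be illustrated with partial correlation: regress out a conditioning variable and check whether the residual correlation vanishes. The sketch below uses hypothetical feature names inspired by the EDLFs, with simulated data, and is not the paper's actual model:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing out z —
    the basic conditional-independence check used by constraint-based
    causal discovery algorithms."""
    def residual(a, b):
        B = np.column_stack([b, np.ones(len(b))])   # regressor plus intercept
        coef, *_ = np.linalg.lstsq(B, a, rcond=None)
        return a - B @ coef
    rx, ry = residual(x, z), residual(y, z)
    return np.corrcoef(rx, ry)[0, 1]

# Toy causal chain: quality -> pause -> label. Conditioning on the
# mediator `pause` should make quality and label nearly independent.
rng = np.random.default_rng(1)
quality = rng.standard_normal(2000)
pause = 0.9 * quality + 0.3 * rng.standard_normal(2000)
label = 0.8 * pause + 0.3 * rng.standard_normal(2000)
marginal = np.corrcoef(quality, label)[0, 1]        # clearly non-zero
conditional = partial_corr(quality, label, pause)   # near zero
```

Distinguishing such chains from direct feature-to-label links is exactly what lets a causal model say which EDLFs carry independent evidence about whether a clip is spoofed, rather than merely co-varying with other cues.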