UMBC Computer Science and Electrical Engineering Department
The Computer Science and Electrical Engineering Department aims to maintain a program of excellence in teaching, research, and service for all of its programs. At the undergraduate level, we will provide students with a firm foundation of both the theory and practice of Computer Science and Computer Engineering. Our curricula also give students the social, ethical, and liberal education needed to make significant contributions to society. Students receiving a bachelor’s degree are ready to enter the workforce as productive computer scientists or computer engineers, or to continue their education at the graduate or professional level.
At the graduate level, we are committed to developing the research and professional capabilities of students in Computer Science, Computer Engineering, Electrical Engineering and Cybersecurity. Our programs provide a deeper mastery of the basics of these fields, as well as opportunities to collaborate on leading-edge research with our faculty. Our faculty are engaged in both practical and theoretical research, often in partnership with government agencies, private industry and non-governmental organizations. The aim of this research is to advance knowledge within our disciplines and also to contribute to solving problems faced by our society.
Recent Submissions
Item Using Large Language Models to Extract Planning Knowledge from Common Vulnerabilities and Exposures (International Conference on Automated Planning and Scheduling (ICAPS), 2024) Oates, Tim; Alford, Ron; Johnson, Shawn; Hall, Cory
Understanding attackers’ goals and plans is crucial for cyber defense, which relies on understanding the basic steps that attackers can take to exploit vulnerabilities. There is a wealth of knowledge about vulnerabilities in text, such as Common Vulnerabilities and Exposures (CVEs), that is accessible to humans but not machines. This paper presents a system, called CLLaMP, that uses large language models (LLMs) to extract declarative representations of CVEs as planning operators represented using the Planning Domain Definition Language (PDDL). CLLaMP ingests CVEs, stores them in a database, uses an LLM to extract a PDDL action that specifies the preconditions for, and the effects of, the exploit, and updates the database with the action. The resulting planning operators can be used for automatically recognizing attacker plans in real time. We propose metrics for evaluating the quality of extracted operators and show the translation results for a set of randomly selected CVEs.

Item Fusion of Novel FMRI Features Using Independent Vector Analysis for a Multifaceted Characterization of Schizophrenia (European Association for Signal Processing (EURASIP), 2024) Jia, Chunying; Akhonda, Mohammad Abu Baker Siddique; Yang, Hanlu; Calhoun, Vince D.; Adali, Tulay
The fractional amplitude of low-frequency fluctuation (fALFF) is a widely used feature for resting-state functional magnetic resonance imaging (fMRI) analysis but captures limited information. Here, we propose two novel features, maxTP (maximum amplitude across time points) and max-RSN (maximum values across resting-state networks), that capture temporal peaks and salient spatial components, respectively.
Using fMRI data from the Bipolar and Schizophrenia Network for Intermediate Phenotypes project, we constructed a dataset by combining fALFF with the proposed features. Subsequently, we applied a data fusion framework by utilizing independent vector analysis on this dataset, leveraging both second- and higher-order statistical information. Our analysis revealed significant group differences between schizophrenia patients and healthy controls in various brain regions. Notably, differences in the visual cortex were detected across all three feature datasets, suggesting its potential as a schizophrenia biomarker across different measures. Thus, by incorporating new features and a multi-feature data fusion approach, this study provides insights into the multifaceted nature of brain alterations in schizophrenia, emphasizing the importance of conducting neuroimaging analyses with complementary features to explore brain activity changes in psychiatric conditions.

Item Assessing Pediatric Cognitive Development via Multisensory Brain Imaging Analysis (European Association for Signal Processing (EURASIP), 2024) Belyaeva, Irina; Wang, Yu-Ping; Wilson, Tony W.; Calhoun, Vince D.; Stephen, Julia M.; Adali, Tulay
Adolescence is a special period between childhood and adulthood and constitutes a critical developmental stage for humans. During adolescence, the brain processes various stimuli to form a complete view of the world. This study highlights the critical role of multisensory integration, where the brain processes multiple senses together rather than focusing on just one sensory modality at a time. Brain imaging modalities such as magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) can be utilized to gain insights into the non-additive effects of multisensory integration by fusing data across different sensory stimuli in both time and space.
While MEG and fMRI are powerful tools, traditional approaches to combining data from these modalities often ignore their multisensory aspect, focusing instead on single tasks. To leverage their complementarity, we introduce a multitask learning multimodal data fusion framework for joint learning of multisensory brain developmental patterns from MEG and fMRI data through a novel application of coupled canonical polyadic tensor decomposition. The multitask learning paradigm performs multimodal fusion from multiple sensory stimuli using multitask coupled tensor-tensor factorization (MCTTF). We demonstrate that multitask multimodal fusion of MEG and fMRI data can identify unique brain components, demonstrating a higher group-level multisensory integration effect.

Item Illuminating the Shadows - Challenges and Risks of Generative AI in Computer Vision for Brands (2024-08-25) Kaplunovich, Alex
The rapid advancements in generative AI have significantly transformed computer vision, presenting both opportunities and challenges for brands. This paper delves into the risks associated with the use of generative AI in computer vision applications, focusing in particular on brand integrity, detection, and security. One primary concern is the ethical implications, where LLMs can amplify biases, produce fake product images, and propagate harmful stereotypes, affecting brand reputation. The rise of deepfakes and AI-generated content poses a substantial risk of disinformation, leading to potential misuse in creating misleading advertisements or damaging a brand's image through falsified media. Legal challenges are another critical aspect, especially concerning intellectual property rights and copyright issues. The ability of generative AI to produce content indistinguishable from original works raises questions about ownership, detection techniques, and the legal frameworks required to protect brands.
To address these challenges, the paper explores various generation, detection, and mitigation strategies, emphasizing the importance of developing responsible and trustworthy generative AI technologies. By highlighting these issues, the paper aims to foster a balanced discourse on the ethical and practical aspects of generative AI in computer vision for brands, shares detection results, and suggests mitigation strategies.

Item Forecast Aware Model-Driven Neural Learning for Air Quality Bias Correction (IEEE, 2024-09-05) Hamer, Sophia; Sleeman, Jennifer; Stajner, Ivanka
Poor air quality can have a significant impact on human health. The National Oceanic and Atmospheric Administration (NOAA) air quality forecasting guidance is challenged by the increasing presence of extreme air quality events due to extreme weather events such as wildfires and heatwaves. These extreme air quality events further affect human health. Traditional methods used to correct model bias make assumptions about linearity and the underlying distribution. Extreme air quality events tend to occur without a strong signal leading up to the event, and this behavior tends to cause existing methods to either under- or overcompensate for the bias. Deep learning holds promise for air quality forecasting in the presence of extreme air quality events due to its ability to generalize and learn nonlinear problems. However, in the presence of these anomalous air quality events, standard deep network approaches that use a single network for generalizing to future forecasts may not always provide the best performance, even with a full feature set including geography and meteorology. In this work we describe a method that combines unsupervised learning and a forecast-aware bidirectional Long Short-Term Memory (LSTM) network to perform bias correction for operational air quality forecasting using AirNow station data for ozone and PM2.5 in the continental US.
Using an unsupervised clustering method trained on station geographical features such as latitude and longitude, urbanization, and elevation, the learned clusters direct training by partitioning the training data for the LSTM networks. The LSTMs are forecast-aware and are implemented in a unique way that performs learning forward and backward in time across forecasting days. When comparing the Root Mean Squared Error (RMSE) of the forecast model to the RMSE of the bias-corrected model, the bias-corrected model shows significant improvement, with a 27% lower RMSE for ozone than the base forecast.

Item Cognitive Networks and Performance Drive fMRI-Based State Classification Using DNN Models (2024-08-14) Kucukosmanoglu, Murat; Garcia, Javier O.; Brooks, Justin; Bansal, Kanika
Deep neural network (DNN) models have demonstrated impressive performance in various domains, yet their application in cognitive neuroscience is limited due to their lack of interpretability. In this study we employ two structurally different and complementary DNN-based models, a one-dimensional convolutional neural network (1D-CNN) and a bidirectional long short-term memory network (BiLSTM), to classify individual cognitive states from fMRI BOLD data, with a focus on understanding the cognitive underpinnings of the classification decisions. We show that despite the architectural differences, both models consistently produce a robust relationship between prediction accuracy and individual cognitive performance, such that low performance leads to poor prediction accuracy. To achieve model explainability, we used permutation techniques to calculate feature importance, allowing us to identify the most critical brain regions influencing model predictions. Across models, we found the dominance of visual networks, suggesting that task-driven state differences are primarily encoded in visual processing.
Attention and control networks also showed relatively high importance; however, default mode and temporal-parietal networks demonstrated negligible contributions in differentiating cognitive states. Additionally, we observed individual trait-based effects and subtle model-specific differences, such that the 1D-CNN showed slightly better overall performance, while the BiLSTM showed better sensitivity to individual behavior; these initial findings require further research and robustness testing to be fully established. Our work underscores the importance of explainable DNN models in uncovering the neural mechanisms underlying cognitive state transitions, providing a foundation for future work in this domain.

Item On the Baltimore Light RailLink into the quantum future (2024-06-17) Domino, Krzysztof; Doucet, Emery; Robertson, Reece; Gardas, Bartłomiej; Deffner, Sebastian
In the current era of noisy intermediate-scale quantum (NISQ) technology, quantum devices present new avenues for addressing complex, real-world challenges, including potentially NP-hard optimization problems. This work aims to showcase how the inherent noise in NISQ devices can be leveraged to solve such real-world problems effectively. Utilizing a D-Wave quantum annealer and IonQ's gate-based NISQ computers, we generate and analyze solutions for managing train traffic under stochastic disturbances. Our case study focuses on the Baltimore Light RailLink, which embodies the characteristics of both tramway and railway networks. We explore the feasibility of using NISQ technology to model the stochastic nature of disruptions in these transportation systems.
Our research marks the inaugural application of both quantum computing paradigms to tramway and railway rescheduling, highlighting the potential of quantum noise as a beneficial resource in complex optimization scenarios.

Item Flood-ResNet50: Optimized Deep Learning Model for Efficient Flood Detection on Edge Device (IEEE, 2024-03-19) Khan, Md Azim; Ahmed, Nadeem; Padela, Joyce; Raza, Muhammad Shehrose; Gangopadhyay, Aryya; Wang, Jianwu; Foulds, James; Busart, Carl; Erbacher, Robert F.
Floods are highly destructive natural disasters that result in significant economic losses and endanger human and wildlife lives. Efficiently monitoring flooded areas through the utilization of deep learning models can contribute to mitigating these risks. This study focuses on the deployment of deep learning models specifically designed for classifying flooded and non-flooded areas in UAV images. In consideration of computational costs, we propose a modified version of ResNet50 called Flood-ResNet50. By incorporating additional layers and leveraging transfer learning techniques, Flood-ResNet50 achieves performance comparable to larger models like VGG16/19, AlexNet, DenseNet161, EfficientNetB7, Swin (small), and vision transformers. Experimental results demonstrate that the proposed modification of ResNet50, incorporating additional layers, achieves a classification accuracy of 96.43%, an F1 score of 86.36%, a recall of 81.11%, a precision of 92.41%, a model size of 98 MB, and 4.3 billion FLOPs on the FloodNet dataset.
When deployed on edge devices such as the Jetson Nano, our model demonstrates faster inference speed (820 ms), higher throughput (39.02 fps), and lower average power consumption (6.9 W) compared to the larger ResNet101 and ResNet152 models.

Item Drug Abuse Ontology to Harness Web-Based Data for Substance Use Epidemiology Research: Ontology Development Study (JMIR, 2022-12-23) Lokala, Usha; Lamy, Francois; Daniulaityte, Raminta; Gaur, Manas; Gyrard, Amelie; Thirunarayan, Krishnaprasad; Kursuncu, Ugur; Sheth, Amit
Background: Web-based resources and social media platforms play an increasingly important role in health-related knowledge and experience sharing. There is a growing interest in the use of these novel data sources for epidemiological surveillance of substance use behaviors and trends.
Objective: The key aims were to describe the development and application of the drug abuse ontology (DAO) as a framework for analyzing web-based and social media data to inform public health and substance use research in the following areas: determining user knowledge, attitudes, and behaviors related to nonmedical use of buprenorphine and illicitly manufactured opioids through the analysis of web forum data (Prescription Drug Abuse Online Surveillance); analyzing patterns and trends of cannabis product use in the context of evolving cannabis legalization policies in the United States through analysis of Twitter and web forum data (eDrugTrends); assessing trends in the availability of novel synthetic opioids through the analysis of cryptomarket data (eDarkTrends); and analyzing COVID-19 pandemic trends in social media data related to 13 states in the United States as per Mental Health America reports.
Methods: The domain and scope of the DAO were defined using competency questions from a popular ontology methodology (Ontology Development 101).
The 101 method includes determining the domain and scope of the ontology, reusing existing knowledge, enumerating important terms in the ontology, defining the classes and their properties, and creating instances of the classes. The quality of the ontology was evaluated using a set of tools and best practices recognized by the semantic web community and the artificial intelligence community that engage in natural language processing.
Results: The current version of the DAO comprises 315 classes, 31 relationships, and 814 instances among the classes. The ontology is flexible and can easily accommodate new concepts. The integration of the ontology with machine learning algorithms dramatically decreased the false alarm rate by adding external knowledge to the machine learning process. The ontology is recurrently updated to capture evolving concepts in different contexts and applied to analyze data related to social media and dark web marketplaces.
Conclusions: The DAO provides a powerful framework and a useful resource that can be expanded and adapted to a wide range of substance use and mental health domains to help advance big data analytics of web-based data for substance use epidemiology research.
Trial Registration:

Item Semantically-informed Hierarchical Event Modeling (ACL, 2023-07) Roy Dipta, Shubhashis; Rezaee, Mehdi; Ferraro, Francis
Prior work has shown that coupling sequential latent variable models with semantic ontological knowledge can improve the representational capabilities of event modeling approaches. In this work, we present a novel, doubly hierarchical, semi-supervised event modeling framework that provides structural hierarchy while also accounting for ontological hierarchy. Our approach consists of multiple layers of structured latent variables, where each successive layer compresses and abstracts the previous layers.
We guide this compression through the injection of structured ontological knowledge that is defined at the type level of events: importantly, our model allows for partial injection of semantic knowledge and does not depend on observing instances at any particular level of the semantic ontology. Across two different datasets and four different evaluation metrics, we demonstrate that our approach is able to outperform the previous state-of-the-art approaches by up to 8.5%, demonstrating the benefits of structured and semantic hierarchical knowledge for event modeling.

Item Multimodal Language Learning for Object Retrieval in Low Data Regimes in the Face of Missing Modalities (OpenReview, 2023-08-11) Darvish, Kasra; Raff, Edward; Ferraro, Francis; Matuszek, Cynthia
Our study is motivated by robotics, where, when dealing with robots or other physical systems, we often need to balance the competing concerns of relying on complex, multimodal data coming from a variety of sensors with a general lack of large representative datasets. Despite the complexity of modern robotic platforms and the need for multimodal interaction, there has been little research on integrating more than two modalities in a low data regime with the real-world constraint that sensors fail due to obstructions or adverse conditions. In this work, we consider a case in which natural language is used as a retrieval query against objects, represented across multiple modalities, in a physical environment. We introduce extended multimodal alignment (EMMA), a method that learns to select the appropriate object while jointly refining modality-specific embeddings through a geometric (distance-based) loss. In contrast to prior work, our approach is able to incorporate an arbitrary number of views (modalities) of a particular piece of data. We demonstrate the efficacy of our model on a grounded language object retrieval scenario.
We show that our model outperforms state-of-the-art baselines when little training data is available. Our code is available at https://github.com/kasraprime/EMMA.

Item Privacy-Preserving Data Sharing in Agriculture: Enforcing Policy Rules for Secure and Confidential Data Synthesis (IEEE, 2023-12) Kotal, Anantaa; Elluri, Lavanya; Gupta, Deepti; Mandalapu, Varun; Joshi, Anupam
Big Data empowers the farming community with the information needed to optimize resource usage, increase productivity, and enhance the sustainability of agricultural practices. The use of Big Data in farming requires the collection and analysis of data from various sources such as sensors, satellites, and farmer surveys. While Big Data can provide the farming community with valuable insights and improve efficiency, there is significant concern regarding the security of this data as well as the privacy of the participants. Privacy regulations, such as the European Union’s General Data Protection Regulation (GDPR), the EU Code of Conduct on agricultural data sharing by contractual agreement, and the proposed EU AI law, have been created to address the issue of data privacy and provide specific guidelines on when and how data can be shared between organizations. To make confidential agricultural data widely available for Big Data analysis without violating the privacy of the data subjects, we consider privacy-preserving methods of data sharing in agriculture. Synthetic data that retains the statistical properties of the original data but does not include actual individuals’ information provides a suitable alternative to sharing sensitive datasets. Deep learning-based synthetic data generation has been proposed for privacy-preserving data sharing. However, there is a lack of compliance with documented data privacy policies in such privacy-preserving efforts. In this study, we propose a novel framework for enforcing privacy policy rules in privacy-preserving data generation algorithms.
We explore several available agricultural codes of conduct, extract knowledge related to the privacy constraints in data, and use the extracted knowledge to define privacy bounds in a privacy-preserving generative model. We use our framework to generate synthetic agricultural data and present experimental results that demonstrate the utility of the synthetic dataset in downstream tasks. We also show that our framework can evade potential threats, such as re-identification and linkage issues, and secure data based on applicable regulatory policy rules.

Item Employing word-embedding for schema matching in standard lifecycle management (Elsevier, 2024-03-01) Oh, Hakju; Kulvatunyou, Boonserm (Serm); Jones, Albert; Finin, Tim
Today, businesses rely on numerous information systems to achieve their production goals and improve their global competitiveness. Semantically integrating those systems is essential for businesses to achieve both. To do so, businesses must rely on standards, the most important of which are data exchange standards (DES). DES focus on the technical and business semantics that are needed to deliver quality and timely products and services. Consequently, the ability of businesses to quickly use and adapt DES to their innovations and processes is crucial. Traditionally, information standards are managed and used 1) in a platform-specific form and 2) usually with standalone, file-based applications. These traditional approaches no longer meet today's business and information agility needs. For example, businesses now must deal with companies and suppliers that use heterogeneous syntaxes for their information, each optimized for individual but differing objectives. Moreover, file-based standards and the usage specifications derived from them cause inconsistencies, since there is neither a single standard format for each usage specification nor a single source of truth for all of them.
As the number and types of information systems grow, developing, maintaining, reviewing, and approving standards and their derived usage specifications are becoming more difficult and time-consuming. Each file-based usage specification is typically based on a different syntax than the standard syntax. As a result, each usage specification must be manually updated as the standard evolves; this can cause significant delays and costs in adopting new and better standard versions. The National Institute of Standards and Technology (NIST), in collaboration with the Open Applications Group Inc. (OAGi), has developed a web-based standard lifecycle management tool called SCORE to address these problems. The objective of this paper is to introduce the SCORE tool and discuss its particular functionality in which a word-embedding technique has been employed along with other schema-matching approaches. Together they can assist standard users in updating a usage specification when a new version of a standard is released, leading to faster adaptation of DES to new processes.

Item Leveraging semantic context to establish access controls for secure cloud-based electronic health records (Elsevier, 2024-04-01) Walid, Redwan; Joshi, Karuna; Choi, Seung Geol
With the continuous growth of cloud-based Electronic Health Record (EHR) systems and medical data, medical organizations are particularly concerned about storing patient data to provide fast services while adhering to privacy and security concerns. Existing EHR systems often face challenges in handling heterogeneous data and maintaining good performance as data grows. These systems mostly use relational databases or partially store data in a knowledge graph, making it challenging to handle big data and allow flexible schema expansion. Hence, there is a need to address these problems.
This paper provides a solution by proposing a novel graph-based EHR system integrating Attribute-Based Encryption and Semantic Web Technologies to ensure fine-grained, EHR field-level security of patient records. Our approach leverages semantic context to query a knowledge graph that stores encrypted medical data in its nodes, making it possible to handle heterogeneous data while ensuring optimal performance and preserving patient privacy.

Item BERALL: Towards Generating Retrieval-augmented State-based Interactive Fiction Games
Chambers, Rachel; Tack, Naomi; Pearson, Eliot; Martin, Lara J; Ferraro, Francis
Interactive fiction (IF) games are a genre of games where the player interacts with the fictional world via text-based commands, solving puzzles primarily by exploring the world and using items they collect along the way. Although there has been much work on playing IF using AI, there is relatively less work on the creation of such games using AI. While large language models (LLMs) have made the generation of text far easier in the past several years, they still struggle to generate the highly structured and consistent story worlds that one might see in IF. We present a three-part system called BERALL, which generates unique text adventure games by 1) maintaining the current state of the story world, 2) using retrieval-augmented generation (RAG) to create relevant location descriptions, and 3) combining these components to create a coherent experience for the player. Our approach is effective at generating room and story descriptions from the setting and knowledge graphs, demonstrating the potential benefits of LLMs in IF generation.
We find that challenges remain in maintaining the current game state due, in part, to LLMs not understanding the impact of changes to the knowledge graph generated by the player’s command.

Item Variability of Eastern North Atlantic Summertime Marine Boundary Layer Clouds and Aerosols Across Different Synoptic Regimes Identified with Multiple Conditions (2024-08-22) Zheng, Xue; Qiu, Shaoyue; Zhang, Damao; Adebiyi, Adeyemi A.; Zheng, Xiaojian; Faruque, Omar; Tao, Cheng; Wang, Jianwu
This study estimates the meteorological covariations of aerosol and marine boundary layer (MBL) cloud properties in the Eastern North Atlantic (ENA) region, characterized by diverse synoptic conditions. Using a deep-learning-based clustering model with mid-level and surface daily meteorological data, we identify seven distinct synoptic regimes during the summers from 2016 to 2021. Our analysis, incorporating reanalysis data and satellite retrievals, shows that surface aerosols and MBL clouds exhibit clear regime-dependent characteristics, while lower-tropospheric aerosols do not. This discrepancy likely arises because synoptic regimes determined by daily large-scale conditions may overlook the air mass histories that predominantly dictate lower-tropospheric aerosol conditions. Focusing on three regimes dominated by northerly winds, we analyze the Atmospheric Radiation Measurement (ARM) program's ENA observations on Graciosa Island in the Azores. In the subtropical anticyclone regime, fewer cumulus clouds and more single-layer stratocumulus clouds with light drizzle are observed, along with the highest cloud droplet number concentration (Nd), surface cloud condensation nuclei (CCN), and surface aerosol levels. The post-trough regime features more broken or multi-layer stratocumulus clouds with a slightly higher surface rain rate and lower Nd and surface CCN levels.
The weak trough regime is characterized by the deepest MBL clouds, primarily cumulus and broken stratocumulus clouds, with the strongest surface rain rate and the lowest Nd, surface CCN, and surface aerosol levels, indicating strong wet scavenging. These findings highlight the importance of considering the covariation of cloud and aerosol properties driven by large-scale regimes when assessing aerosol indirect effects using observations.

Item A Human-Centric Comparative Analysis of Trajectory Design Methods for Multi-Body Dynamics (2024-08) Martinez-Samaniego, Edison; Aros, Michelle; Bader, Laith; Schmitt, Justin; Auerback, Lily; Anderson, Joseph; Cooks, Karis; Canales Garcia, David; Chaparro, Barbara; Guzzetti, Davide
Trajectory design in the circular restricted three-body problem (CR3BP) is a complex but crucial process for understanding systems in the realm of multiple celestial bodies. By focusing on a human factors perspective, this research aims to evaluate the use of augmented reality (AR) and virtual reality (VR) for solving trajectories. While preliminary work indicates AR's potential to enhance user interface intuitiveness and visualization compared to traditional software like FreeFlyer or STK, this study focuses on the usability, efficiency, and visualization aspects of all tools, providing insights into each method's effectiveness and user-friendliness. Overall, this research contributes to advancing mission planning efficiency and human-centered design principles in spacecraft trajectory design, and examines the opportunity cost of maintaining the current status quo.

Item Unboxing Occupational Bias: Grounded Debiasing of LLMs with U.S. Labor Data (2024-08-20) Gorti, Atmika; Gaur, Manas; Chadha, Aman
Large Language Models (LLMs) are prone to inheriting and amplifying societal biases embedded within their training data, potentially reinforcing harmful stereotypes related to gender, occupation, and other sensitive categories.
This issue becomes particularly problematic, as biased LLMs can have far-reaching consequences, leading to unfair practices and exacerbating social inequalities across various domains such as recruitment, online content moderation, and even the criminal justice system. Although prior research has focused on detecting bias in LLMs using specialized datasets designed to highlight intrinsic biases, there has been a notable lack of investigation into how these findings correlate with authoritative datasets, such as those from the U.S. National Bureau of Labor Statistics (NBLS). To address this gap, we conduct empirical research that evaluates LLMs in a "bias-out-of-the-box" setting, analyzing how the generated outputs compare with the distributions found in NBLS data. Furthermore, we propose a straightforward yet effective debiasing mechanism that directly incorporates NBLS instances to mitigate bias within LLMs. Our study spans seven different LLMs, including instructable, base, and mixture-of-experts models, and reveals significant levels of bias that are often overlooked by existing bias detection techniques. Importantly, our debiasing method, which does not rely on external datasets, demonstrates a substantial reduction in bias scores, highlighting the efficacy of our approach in creating fairer and more reliable LLMs.

Item Substrate optimization with the adjoint method and layered medium Green’s functions (Optica, 2024-09-12) Simsek, Ergun; Islam, Raonaqul; Oishe, Sumya H.; Menyuk, Curtis
In recent years, the photonics community has shown increasing interest in the inverse design of photonic components and devices using the adjoint method (AM) due to its efficient gradient computation and suitability for large parameter spaces and continuous design spaces. This work focuses on substrate optimization to maximize light transmission or field enhancement at specific locations using layered medium Green’s functions (LMGFs).
We first provide a numerical formulation for calculating two-dimensional (2D) LMGFs, leveraging their efficiency for fixed sources and observation points parallel to layer interfaces. We then present a step-by-step implementation of the AM for substrate optimization using LMGFs. Through numerical studies, we verify the field enhancement achieved with AM-designed substrates using a frequency-domain solver. We compare the results of the AM with particle swarm optimization (PSO) for two optimization problems, demonstrating that the AM not only generates realistic designs with smooth permittivity profiles but also achieves inverse design more efficiently than PSO. The AM designs are easier to fabricate and require significantly less computational effort due to the efficient gradient computation inherent in the method. This study underscores the advantages of the AM in designing photonic devices with continuous parameter spaces.

Item Study of an MoS₂ phototransistor using a compact numerical method enabling detailed analysis of 2D material phototransistors (Nature, 2024-07-03) Islam, Raonaqul; Anjum, Ishraq Md; Menyuk, Curtis; Simsek, Ergun
Research on two-dimensional material-based phototransistors has recently become a topic of great interest. However, the large number of design features that impact the performance of these devices and the multi-physical nature of the device operation make the accurate analysis of these devices a challenge. Here, we present a simple yet effective numerical framework to overcome this challenge. The one-dimensional framework is constructed on the drift-diffusion equations, Poisson’s equation, and the formalism for wave propagation in a multi-layered medium. We apply this framework to study phototransistors made from monolayer molybdenum disulfide (MoS₂) placed on top of a back-gated silicon-oxide-coated silicon substrate.
Numerical results, which show good agreement with the experimental results found in the literature, emphasize the necessity of including the inhomogeneous background for accurately calculating device metrics such as quantum efficiency and bandwidth. For the first time in the literature, we calculate the phase noise of these phototransistors, which is a crucial performance metric for many applications where precise timing and synchronization are critical. We determine that applying a low drain-to-source voltage is the key requirement for low phase noise.
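The one-dimensional framework described in this last abstract couples the drift-diffusion equations with Poisson's equation. As a minimal illustration of the kind of building block such a framework rests on (this is not the authors' implementation; the grid size, source term, and boundary conditions are invented for the example), the sketch below solves a 1D Poisson equation with a finite-difference discretization and the Thomas algorithm for the resulting tridiagonal system:

```python
# Illustrative sketch only: solve phi'' = -rho(x) on (0, L) with
# phi(0) = phi(L) = 0, using central finite differences on a uniform
# grid and the Thomas algorithm for the tridiagonal linear system.

def solve_poisson_1d(rho, length=1.0):
    """Return phi at the n interior grid points for phi'' = -rho.

    rho: list of source values at the n interior points of a uniform
    grid with spacing h = length / (n + 1); zero Dirichlet boundaries.
    """
    n = len(rho)
    h = length / (n + 1)
    # Discretization: (-phi[i-1] + 2*phi[i] - phi[i+1]) / h^2 = rho[i]
    a = [-1.0] * n                 # sub-diagonal
    b = [2.0] * n                  # main diagonal
    c = [-1.0] * n                 # super-diagonal
    d = [h * h * r for r in rho]   # right-hand side

    # Forward elimination (Thomas algorithm)
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]

    # Back substitution
    phi = [0.0] * n
    phi[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        phi[i] = (d[i] - c[i] * phi[i + 1]) / b[i]
    return phi


if __name__ == "__main__":
    # A uniform unit source rho = 1 has the analytic solution
    # phi(x) = x * (1 - x) / 2, so the midpoint value is 0.125.
    phi = solve_poisson_1d([1.0] * 99)
    print(round(phi[49], 6))
```

In a full device solver along these lines, a Poisson step like this would be iterated with drift-diffusion updates of the carrier densities until the potential and currents are self-consistent.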