Title: Fair and Interpretable Pseudo Value-Based Deep Learning Models for Federated Survival Analysis
Author: Rahman, Md Mahmudur
Advisor: Purushotham, Sanjay
Date issued: 2024-01-01
Date available: 2024-09-06
Handle: http://hdl.handle.net/11603/36085
Format: application/pdf
Type: Text
Rights: This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu.

Abstract

Survival analysis, or time-to-event analysis, aims to predict the time until an event occurs, providing valuable insights into the temporal aspects of various phenomena, such as disease progression. This dissertation addresses the growing need for fair and interpretable machine learning models in survival analysis within the healthcare domain, alongside the necessity for privacy-preserving distributed training methods to enhance generalization and data utilization. In particular, we focus on the following problems: 1) how to efficiently handle censoring, i.e., incomplete survival outcomes; 2) how to make unbiased estimates of survival analysis quantities in the presence of competing risks and multi-state transitions; 3) how to enhance the interpretability and fairness of survival analysis models; and 4) how to enable privacy-preserving distributed training of survival models to address the limited data utilization and lack of generalization caused by strict privacy laws such as GDPR and HIPAA.

In this dissertation, we provide the following solutions to these problems. 1) To address the censoring challenge, we utilize theoretically consistent pseudo-values from statistical paradigms, simplifying the problem to a regression analysis task. 2) To provide unbiased survival predictions for complex problems such as competing risk and multi-state survival analysis, we introduce novel pseudo-value-based deep learning models, DeepPseudo and msPseudo. 3) To enhance interpretability, we propose a pseudo-value-based neural additive model, PseudoNAM, which achieves performance comparable to deep models while offering global and feature-level interpretations. Additionally, we propose the Fair DeepPseudo and Fair PseudoNAM models, which incorporate new fairness constraints into a novel pseudo-value-based objective function to ensure equitable and trustworthy survival predictions in the presence of demographic and censoring bias. 4) To enable multi-institution collaboration while preserving data privacy, we introduce federated learning frameworks: FedPseudo for survival analysis and Fedora for competing risk analysis. Furthermore, we introduce a random-forest-based federated survival analysis (FSA) framework, FedPRF, to address the communication burden of model exchange, and a pioneering fair FSA framework, FairFSA, which integrates fairness through distributionally robust optimization to ensure equitable global survival predictions across clients.

We evaluated our approaches on both centralized and decentralized survival datasets, achieving significant improvements over existing methods. These advancements are expected to facilitate better decision-making, optimize healthcare resource allocation, reduce costs, improve treatment interventions and therapy strategies, enhance patient care, and ultimately contribute to improved survival outcomes.
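
The abstract's first two contributions rest on pseudo-values, which replace censored survival outcomes with per-subject regression targets. The sketch below is a minimal illustration of standard jackknife pseudo-observations for the Kaplan-Meier survival probability at a fixed time point; it is not the dissertation's DeepPseudo or msPseudo code, and the function and variable names are illustrative.

```python
import numpy as np

def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t); events: 1 = event observed, 0 = censored."""
    order = np.lexsort((-events, times))       # process events before censorings at tied times
    times, events = times[order], events[order]
    at_risk, surv = len(times), 1.0
    for time, event in zip(times, events):
        if time > t:
            break
        if event == 1:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

def pseudo_observations(times, events, t):
    """Jackknife pseudo-values: n * S_hat(t) - (n - 1) * S_hat^(-i)(t) for each subject i."""
    n = len(times)
    s_full = km_survival(times, events, t)
    pv = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        pv[i] = n * s_full - (n - 1) * km_survival(times[keep], events[keep], t)
    return pv

# Toy usage: pseudo-values at t = 5 become ordinary regression targets.
rng = np.random.default_rng(0)
times = rng.exponential(10.0, size=50)
events = rng.integers(0, 2, size=50)           # 0 = censored, 1 = event observed
targets = pseudo_observations(times, events, t=5.0)   # regress these on covariates
```

Once such targets exist, the censoring problem reduces to regression: any model, from a linear regressor to the deep and neural additive architectures described above, can be fit to (covariate, pseudo-value) pairs with a standard squared-error loss.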
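
The federated frameworks named in the abstract (FedPseudo, Fedora, FedPRF, FairFSA) share a common pattern: clients train locally and exchange only model parameters, never raw patient records. As a point of reference, here is a minimal FedAvg-style communication round for a pseudo-value regression network in PyTorch; it omits the dissertation's federated pseudo-value computation and fairness machinery, and every name below is a placeholder of my own.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, x, y, epochs=5, lr=1e-2):
    """Client-side training on local (covariates, pseudo-value) pairs; returns weights and sample count."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return model.state_dict(), len(y)

def fed_avg(updates):
    """Server-side sample-size-weighted average of client state dicts (plain FedAvg)."""
    total = sum(n for _, n in updates)
    return {k: sum(sd[k] * (n / total) for sd, n in updates)
            for k in updates[0][0]}

# One communication round over three hypothetical clients holding disjoint cohorts.
torch.manual_seed(0)
global_model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
clients = [(torch.randn(40, 8), torch.rand(40)) for _ in range(3)]   # stand-in pseudo-value targets
updates = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(fed_avg(updates))
```

This only shows the baseline exchange pattern the frameworks build on: per the abstract, FedPRF exchanges random-forest models rather than network weights to reduce communication cost, and FairFSA replaces the plain objective with a distributionally robust one to equalize predictions across clients.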