Browsing by Author "Morris, Michael"
Now showing 1 - 6 of 6
Item
Active semi-supervised expectation maximization learning for lung cancer detection from Computerized Tomography (CT) images with minimal labeled training data (SPIE, 2020-03-16)
Nguyen, Phuong; Chapman, David; Menon, Sumeet; Morris, Michael; Yesha, Yelena
Artificial intelligence (AI) has great potential in medical imaging to augment the clinician as a virtual radiology assistant (vRA) by enriching information and providing clinical decision support. Deep learning is a type of AI that has shown promising performance for Computer Aided Diagnosis (CAD) tasks. A current barrier to implementing deep learning for clinical CAD tasks in radiology is that it requires a training set that is representative and as large as possible in order to generalize appropriately and achieve high-accuracy predictions. Despite the abundance of diagnostic imaging examinations performed in routine clinical practice, there is a lack of available, reliable, discretized and annotated labels for computer vision research in radiology. Furthermore, the process of creating reliable labels is tedious, time consuming and requires expertise in clinical radiology. We present an Active Semi-supervised Expectation Maximization (ASEM) learning model for training a Convolutional Neural Network (CNN) for lung cancer screening using Computed Tomography (CT) imaging examinations. Our learning model is novel in that it combines semi-supervised learning via the Expectation-Maximization (EM) algorithm with active learning via Bayesian experimental design for use with 3D CNNs for lung cancer screening. ASEM simultaneously infers image labels as a latent variable, while predicting which images, if additionally labeled, are likely to improve classification accuracy. The performance of this model has been evaluated using three publicly available chest CT datasets: Kaggle2017, NLST, and LIDC-IDRI.
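The EM-with-active-selection loop this abstract describes can be sketched in a deliberately minimal form. Everything below is an illustrative assumption, not the paper's implementation: a nearest-centroid probabilistic classifier stands in for the 3D CNN, and predictive entropy stands in for the Bayesian experimental design criterion; `asem_round`, `predict_proba`, and `entropy` are hypothetical names.

```python
import numpy as np

def entropy(p):
    """Predictive entropy; a simple proxy for expected information gain."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def predict_proba(X, centroids, tau=1.0):
    """Soft class probabilities from squared distances to per-class centroids."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    logits = -d / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def asem_round(X_lab, y_lab, X_unl, n_classes=2, em_iters=5, n_query=2):
    """One round: EM over the unlabeled pool, then pick queries by uncertainty."""
    # M-step initialization from labeled data only
    centroids = np.stack([X_lab[y_lab == k].mean(axis=0) for k in range(n_classes)])
    for _ in range(em_iters):
        # E-step: infer soft labels for the unlabeled pool (the latent variable)
        q = predict_proba(X_unl, centroids)
        # M-step: refit centroids on labeled + soft-labeled data
        w = np.concatenate([np.eye(n_classes)[y_lab], q])
        Xa = np.concatenate([X_lab, X_unl])
        centroids = (w.T @ Xa) / w.sum(axis=0)[:, None]
    # Active step: the most uncertain unlabeled samples are queried for labels
    query_idx = np.argsort(-entropy(predict_proba(X_unl, centroids)))[:n_query]
    return centroids, query_idx
```

On a toy two-cluster problem, a point midway between the clusters receives near-uniform probabilities and is therefore selected for labeling first, which is the intuition behind querying samples that are "likely to improve classification accuracy."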
Our experiments showed that ASEM-CAD can identify suspicious lung nodules and detect lung cancer cases with an accuracy of 92% (Kaggle17), 93% (NLST), and 73% (LIDC), and an Area Under the Curve (AUC) of 0.94 (Kaggle), 0.88 (NLST), and 0.81 (LIDC). These performance numbers are comparable to fully supervised training, but use only slightly more than 50% of the training data labels.

Item
Active Semi-Supervised Learning via Bayesian Experimental Design for Lung Cancer Classification Using Low Dose Computed Tomography Scans (MDPI, 2023-03-15)
Nguyen, Phuong; Rathod, Ankita; Chapman, David; Prathapan, Smriti; Menon, Sumeet; Morris, Michael; Yesha, Yelena
We introduce an active, semi-supervised algorithm that utilizes Bayesian experimental design to address the shortage of annotated images required to train and validate Artificial Intelligence (AI) models for lung cancer screening with computed tomography (CT) scans. Our approach incorporates active learning with semi-supervised expectation maximization to emulate the human in the loop for additional ground-truth labels to train, evaluate, and update the neural network models. Bayesian experimental design is used to intelligently identify which unlabeled samples need ground-truth labels to enhance the model's performance. We evaluate the proposed Active Semi-supervised Expectation Maximization for Computer-aided diagnosis (CAD) tasks (ASEM-CAD) using three public CT scan datasets: the National Lung Screening Trial (NLST), the Lung Image Database Consortium (LIDC), and the Kaggle Data Science Bowl 2017, for lung cancer classification using CT scans. ASEM-CAD can accurately classify suspicious lung nodules and lung cancer cases with an area under the curve (AUC) of 0.94 (Kaggle), 0.95 (NLST), and 0.88 (LIDC) with significantly fewer labeled images than a fully supervised model.
This study addresses one of the significant challenges in early lung cancer screening using low-dose computed tomography (LDCT) scans and is a valuable contribution towards the development and validation of deep learning algorithms for lung cancer screening and other diagnostic radiology examinations.

Item
CCS-GAN: COVID-19 CT-scan classification with very few positive training images (Springer, 2023-04-17)
Menon, Sumeet; Mangalagiri, Jayalakshmi; Galita, Josh; Morris, Michael; Saboury, Babak; Yesha, Yaacov; Yesha, Yelena; Nguyen, Phuong; Gangopadhyay, Aryya; Chapman, David
We present a novel algorithm that is able to classify COVID-19 pneumonia from CT scan slices using a very small sample of training images exhibiting COVID-19 pneumonia in tandem with a larger number of normal images. This algorithm is able to achieve high classification accuracy using as few as 10 positive training slices (from 10 positive cases), which to the best of our knowledge is one order of magnitude fewer than the next closest published work at the time of writing. Deep learning with extremely small positive training volumes is a very difficult problem and has been an important topic during the COVID-19 pandemic, because for quite some time it was difficult to obtain large volumes of COVID-19 positive images for training. Algorithms that can learn to screen for diseases from few examples are an important area of research. We present the Cycle Consistent Segmentation Generative Adversarial Network (CCS-GAN). CCS-GAN combines style transfer with pulmonary segmentation and relevant transfer learning from negative images in order to create a larger volume of synthetic positive images for the purpose of improving diagnostic classification performance. A VGG-19 classifier plus CCS-GAN was trained using a small sample of positive image slices, ranging from at most 50 down to as few as 10 COVID-19 positive CT-scan images.
CCS-GAN achieves high accuracy with few positive images and thereby greatly reduces the barrier of acquiring the large training volumes needed to train a diagnostic classifier for COVID-19.

Item
Generating Realistic COVID19 X-rays with a Mean Teacher + Transfer Learning GAN (IEEE, 2020-09-26)
Menon, Sumeet; Galita, Joshua; Chapman, David; Gangopadhyay, Aryya; Mangalagiri, Jayalakshmi; Nguyen, Phuong; Yesha, Yaacov; Yesha, Yelena; Saboury, Babak; Morris, Michael
COVID-19 is a novel infectious disease responsible for over 800K deaths worldwide as of August 2020. The need for rapid testing is a high priority, and alternative testing strategies, including X-ray image classification, are a promising area of research. However, at present, public datasets of COVID-19 X-ray images have low data volumes, making it challenging to develop accurate image classifiers. Several recent papers have made use of Generative Adversarial Networks (GANs) to increase the training data volumes, but realistic synthetic COVID-19 X-rays remain challenging to generate. We present a novel Mean Teacher + Transfer GAN (MTT-GAN) that generates COVID-19 chest X-ray images of high quality. In order to create a more accurate GAN, we employ transfer learning from the Kaggle Pneumonia X-Ray dataset, a highly relevant data source orders of magnitude larger than public COVID-19 datasets. Furthermore, we employ the Mean Teacher algorithm as a constraint to improve the stability of training. Our qualitative analysis shows that the MTT-GAN generates X-ray images that are greatly superior to a baseline GAN and visually comparable to real X-rays, although board-certified radiologists can distinguish MTT-GAN fakes from real COVID-19 X-rays. Quantitative analysis shows that MTT-GAN greatly improves the accuracy of both a binary COVID-19 classifier and a multi-class pneumonia classifier as compared to a baseline GAN.
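The Mean Teacher algorithm mentioned above maintains a teacher network whose weights are an exponential moving average (EMA) of the student's, and penalizes disagreement between their predictions. A minimal sketch of that generic mechanism, assuming dictionary-of-arrays weights rather than the paper's GAN-specific networks:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher: teacher weights track an exponential moving average of
    the student weights, yielding smoother, more stable training targets."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def consistency_loss(student_out, teacher_out):
    """Penalize disagreement between student and teacher predictions (MSE)."""
    return float(np.mean((student_out - teacher_out) ** 2))
```

In training, `ema_update` runs after each optimizer step on the student, and `consistency_loss` is added to the student's objective as the stabilizing constraint.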
Our classification accuracy is favourable compared to recently reported results in the literature for similar binary and multi-class COVID-19 screening tasks.

Item
IDIOMS: Infectious Disease Imaging Outbreak Monitoring System (ACM, 2020-11)
Gangopadhyay, Aryya; Morris, Michael; Saboury, Babak; Siegel, Eliot; Yesha, Yelena
In this commentary, we propose a framework for convergence accelerator research leveraging AI models with medical images for effective diagnosis, monitoring, and treatment of diseases with pandemic potential. The goal is to create a novel Infectious Disease Imaging Outbreak Monitoring System (IDIOMS) to prospectively anticipate, identify, and characterize potential infectious disease outbreaks across a population of patients in real time, as patients receive medical imaging examinations. IDIOMS will provide critical surveillance before an outbreak is widely identified and before adequate testing resources are available. This can be achieved through the creation of an infectious disease medical imaging library resource and the implementation of a computer vision approach to infectious disease medical imaging classification using Artificial Intelligence (AI). Improved characterization of Infectious Disease (ID) by medical imaging could provide an earlier indicator of a recurrent or future pandemic, even before the underlying pathogen is identified clinically or before an alternative, commercially available, reliable laboratory test can be developed and distributed.
Such an infectious disease medical imaging classifier could have altered the course of the COVID-19 pandemic caused by SARS-CoV-2.

Item
Toward Generating Synthetic CT Volumes using a 3D-Conditional Generative Adversarial Network (IEEE)
Mangalagiri, Jayalakshmi; Chapman, David; Gangopadhyay, Aryya; Yesha, Yaacov; Galita, Joshua; Menon, Sumeet; Yesha, Yelena; Saboury, Babak; Morris, Michael; Nguyen, Phuong
We present a novel conditional Generative Adversarial Network (cGAN) architecture that is capable of generating 3D Computed Tomography scans in voxels from noisy and/or pixelated approximations, with the potential to generate full synthetic 3D scan volumes. We believe the cGAN to be a tractable approach to generating 3D CT volumes, even though the problem of generating full-resolution deep fakes is presently impractical due to GPU memory limitations. We present results for autoencoder, denoising, and depixelating tasks, trained and tested on two novel COVID-19 CT datasets. Our evaluation metrics show a Peak Signal-to-Noise Ratio (PSNR) ranging from 12.53 to 46.46 dB and a Structural Similarity Index (SSIM) ranging from 0.89 to 1.
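The PSNR and SSIM metrics reported for the cGAN above can be computed as follows. This is a minimal sketch, not the paper's evaluation code: PSNR follows the standard definition, while SSIM is evaluated globally over the whole image rather than averaged over the local sliding windows used in the standard formulation.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM over the whole image; standard SSIM averages this
    statistic over local sliding windows instead."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

For example, a uniform additive error of 0.01 on images scaled to [0, 1] gives a PSNR of 40 dB, which sits within the 12.53 to 46.46 dB range quoted above.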