Distributed GPU Computing For Deep Learning In Proton Beam Therapy For Cancer Treatment
Links to Files
Permanent Link
Author/Creator
Author/Creator ORCID
Date
2023-01-01
Type of Work
Department
Information Systems
Program
Information Systems
Citation of Original Publication
Rights
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please see http://aok.lib.umbc.edu/specoll/repro.php or contact Special Collections at speccoll(at)umbc.edu
Distribution Rights granted to UMBC by the author.
Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
Subjects
Abstract
Proton beam therapy is an effective form of cancer treatment in which the radiation dose is delivered in precisely the right amount to affect only the cancerous cells. To monitor this process, medical imaging techniques based on Compton cameras are employed. The current image reconstruction algorithm for Compton cameras is ineffective and produces noisy output images. Previous work demonstrated an effective reconstruction approach using deep neural networks; however, due to the size of the data and the nature of the network, training to good accuracy is time-consuming and suffers from performance problems. This thesis demonstrates the implementation of parallelized training using TensorFlow across a network of different GPUs available at UMBC's High Performance Computing Facility. It also discusses how scaling should be carried out and which key hyperparameters are responsible for faster training and good model accuracy. In addition, this thesis presents several performance studies showing linear scalability, performance gains, and bottlenecks in parallelized deep learning.
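As a rough illustration of the data-parallel TensorFlow training the abstract describes, the following is a minimal sketch using tf.distribute.MirroredStrategy on a single multi-GPU node (the thesis's multi-node setting would use MultiWorkerMirroredStrategy with a TF_CONFIG environment variable instead). The model architecture, batch size, learning-rate scaling rule, and random stand-in data are illustrative assumptions, not the thesis's actual pipeline.

    import tensorflow as tf

    # Synchronous data-parallel training across all GPUs visible on this node.
    strategy = tf.distribute.MirroredStrategy()
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    # A common scaling heuristic: grow the global batch size (and often the
    # learning rate) with the number of replicas so each GPU keeps the same
    # per-device workload. The base values here are assumptions.
    per_replica_batch = 64
    global_batch = per_replica_batch * strategy.num_replicas_in_sync

    # Variables must be created inside the strategy scope so they are
    # mirrored onto every replica and kept in sync by all-reduce updates.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(512, activation="relu"),
            tf.keras.layers.Dense(3),  # placeholder output size
        ])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(
                1e-3 * strategy.num_replicas_in_sync),
            loss=tf.keras.losses.MeanSquaredError(),
        )

    # Random data stands in for the Compton-camera dataset, which is not
    # reproduced here.
    x = tf.random.normal((1024, 128))
    y = tf.random.normal((1024, 3))
    model.fit(x, y, batch_size=global_batch, epochs=2)

Under this strategy each replica processes its own slice of every global batch and gradients are averaged across GPUs before each update, which is the mechanism behind the linear-scaling behavior the abstract's performance studies examine.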