Real-time Automatic Hyperparameter Tuning for Deep Learning in Serverless Cloud

Computer Science and Electrical Engineering


Computer Science


Access limited to the UMBC community. Item may possibly be obtained via Interlibrary Loan through a local library, pending author/copyright holder's permission.
This item may be protected under Title 17 of the U.S. Copyright Law. It is made available by UMBC for non-commercial research and education. For permission to publish or reproduce, please contact Special Collections at speccoll(at)


Deep Neural Networks are used to solve many problems; the importance of the field is underscored by the 2019 Turing Award, given to Yoshua Bengio, Geoffrey Hinton, and Yann LeCun for their pioneering work on deep learning. Despite these advances, most machine learning models are still tuned manually: experienced Data Scientists spend their valuable time adjusting hyperparameters such as the dropout rate, the loss function, or the number of neurons. To address this problem, we have implemented a flexible, automatic, real-time hyperparameter tuning methodology for an arbitrary Deep Neural Network (DNN) written in Python and Keras. We leverage several state-of-the-art cloud services, such as trigger-based serverless computing (AWS Lambda) and EC2 GPU spot instances, to provide automation, reliability, and scalability. Our approach combines state-of-the-art algorithms for random grid search and Bayesian optimization with static code analysis, which discovers candidate hyperparameters in the source code, and automated code refactoring, which rewrites the model so that those parameters can be tuned in the Cloud.

In this dissertation we develop an automatic platform for hyperparameter optimization that works for an arbitrary neural network model written in Python and Keras. We integrate the methodology with a trigger-based serverless cloud, giving the system elastic scalability and reliability, and we design and build an innovative cloud architecture that orchestrates model tuning at scale, monitors our experiments, and utilizes numerous state-of-the-art cloud services. Our novel approach detects potential hyperparameters automatically from the source code, updates the original model to tune those parameters, runs the evaluation in the Cloud on spot instances, finds the optimal hyperparameters, and saves the results in a NoSQL database.
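As a rough illustration of the static-analysis step described above, the following sketch (our own illustration, not the dissertation's actual implementation) uses Python's standard `ast` module to locate literal keyword arguments, such as a `Dropout` rate or a `Dense` layer's unit count, that could serve as hyperparameter candidates. The candidate-name set `CANDIDATE_KWARGS` is a hypothetical example.

```python
import ast

# Hypothetical set of keyword-argument names treated as tunable
# hyperparameter candidates; the platform's real candidate set may differ.
CANDIDATE_KWARGS = {"rate", "units", "lr", "learning_rate", "dropout"}

def find_hyperparameter_candidates(source: str):
    """Walk the AST of model source code and collect keyword arguments
    whose values are literals, e.g. Dropout(rate=0.5) or Dense(units=64).
    Returns (name, literal value, line number) tuples."""
    candidates = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if kw.arg in CANDIDATE_KWARGS and isinstance(kw.value, ast.Constant):
                    candidates.append((kw.arg, kw.value.value, node.lineno))
    return candidates

# Example Keras-style model snippet to analyze (analyzed as text only,
# so Keras itself is not required to run this sketch).
model_source = """
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(rate=0.5))
"""

print(find_hyperparameter_candidates(model_source))
# → [('units', 64, 2), ('rate', 0.5, 3)]
```

A refactoring pass could then replace each detected literal with a parameter supplied by the tuner, letting the random-search or Bayesian-optimization loop evaluate different values in the Cloud.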
We hope our platform will help millions of Data Scientists and researchers save valuable time and money by tuning their advanced Machine Learning models faster.