CoughNet-V2: A Scalable Multimodal DNN Framework for Point-of-Care Edge Devices to Detect Symptomatic COVID-19 Cough

dc.contributor.author: Rashid, Hasib-Al
dc.contributor.author: Sajadi, Mohammad M.
dc.contributor.author: Mohsenin, Tinoosh
dc.date.accessioned: 2022-05-11T13:21:30Z
dc.date.available: 2022-05-11T13:21:30Z
dc.date.issued: 2022-04-01
dc.description: Conference: 2022 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), Houston, TX, USA, 10-11 March 2022
dc.description.abstract: With the emergence of the COVID-19 pandemic, new attention has been given to acoustic bio-markers of respiratory disorders. Deep Neural Networks (DNNs) have become very popular for audio classification tasks due to their impressive performance in speech detection, audio event classification, etc. This paper presents CoughNet-V2, a scalable multimodal DNN framework to detect symptomatic COVID-19 cough. The framework was designed to be implemented on point-of-care edge devices to help doctors at the pre-screening stage of COVID-19 detection. A crowd-sourced multimodal data resource containing subjects’ cough audio along with other relevant medical information was used to design the CoughNet-V2 framework. CoughNet-V2 shows that multimodal integration of cough audio with medical records improves classification performance over that of any unimodal framework. The proposed CoughNet-V2 achieved an area under the curve (AUC) of 88.9% for the binary classification task of symptomatic COVID-19 cough detection. Finally, measurements of the deployment attributes of the CoughNet-V2 model on the processing components of an NVIDIA TX2 development board are presented as a proposition to bring the healthcare system to consumers’ fingertips. Clinical relevance: CoughNet-V2 will help medical practitioners assess whether patients need intensive medical help without physically interacting with them.
dc.description.sponsorship: We acknowledge the support of the University of Maryland, Baltimore, Institute for Clinical Translational Research (ICTR) and the National Center for Advancing Translational Sciences (NCATS) Clinical Translational Science Award (CTSA) grant number UL1TR003098.
dc.description.uri: https://ieeexplore.ieee.org/document/9744064
dc.format.extent: 4 pages
dc.genre: conference papers and proceedings
dc.genre: postprints
dc.identifier: doi:10.13016/m2r2ol-is0z
dc.identifier.citation: H.-A. Rashid, M. M. Sajadi and T. Mohsenin, "CoughNet-V2: A Scalable Multimodal DNN Framework for Point-of-Care Edge Devices to Detect Symptomatic COVID-19 Cough," 2022 IEEE Healthcare Innovations and Point of Care Technologies (HI-POCT), 2022, pp. 37-40, doi: 10.1109/HI-POCT54491.2022.9744064.
dc.identifier.uri: https://doi.org/10.1109/HI-POCT54491.2022.9744064
dc.identifier.uri: http://hdl.handle.net/11603/24683
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Computer Science and Electrical Engineering Department Collection
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Student Collection
dc.rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.title: CoughNet-V2: A Scalable Multimodal DNN Framework for Point-of-Care Edge Devices to Detect Symptomatic COVID-19 Cough
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-9983-6929
dcterms.creator: https://orcid.org/0000-0001-5551-2124
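Note: The abstract above describes a multimodal DNN that fuses a cough-audio branch with a medical-record branch for binary symptomatic COVID-19 cough detection. The PyTorch sketch below is a hypothetical illustration of that kind of late-fusion design, not the published CoughNet-V2 model; the MFCC front-end, layer sizes, and feature dimensions are assumptions made for the example.

import torch
import torch.nn as nn

class MultimodalCoughClassifier(nn.Module):
    """Illustrative late-fusion model: a cough-audio branch (assumed to take
    MFCC features) and a medical-record branch, concatenated for one binary
    output. Not the published CoughNet-V2 architecture."""

    def __init__(self, n_mfcc=13, n_frames=100, n_record_features=10):
        super().__init__()
        # Audio branch: small 1-D CNN over MFCC frames (assumed front-end).
        self.audio_branch = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Medical-record branch: small MLP over tabular features.
        self.record_branch = nn.Sequential(
            nn.Linear(n_record_features, 16),
            nn.ReLU(),
        )
        # Fusion head: concatenate both embeddings, predict one logit.
        self.head = nn.Linear(32 + 16, 1)

    def forward(self, mfcc, record):
        # mfcc: (batch, n_mfcc, n_frames); record: (batch, n_record_features)
        fused = torch.cat([self.audio_branch(mfcc), self.record_branch(record)], dim=1)
        return self.head(fused)  # raw logit; apply sigmoid for a probability

if __name__ == "__main__":
    model = MultimodalCoughClassifier()
    mfcc = torch.randn(4, 13, 100)    # dummy MFCC batch
    record = torch.randn(4, 10)       # dummy medical-record batch
    print(model(mfcc, record).shape)  # torch.Size([4, 1])

Concatenating the two branch embeddings before a single classification head is one common way to realize the multimodal integration that the abstract reports as outperforming unimodal baselines.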

Files

Original bundle
Name: CoughNet_V2 (1).pdf
Size: 709.24 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.56 KB
Format: Item-specific license agreed upon to submission