Gradient Inversion Attacks on Acoustic Signals: Revealing Security Risks in Audio Recognition Systems

dc.contributor.author: Ovi, Pretom Roy
dc.contributor.author: Gangopadhyay, Aryya
dc.date.accessioned: 2024-04-10T19:05:50Z
dc.date.available: 2024-04-10T19:05:50Z
dc.date.issued: 2024-03-18
dc.description: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14-19 April 2024.
dc.description.abstract: With a greater emphasis on data confidentiality and legislation, distributed training and collaborative machine learning algorithms are being developed to protect sensitive private data. Gradient exchange has become a widely used practice in these multi-node machine learning systems. But with the advent of gradient inversion attacks, it is now well established that private training data can be revealed from the gradients. Gradient inversion attacks covertly spy on gradient updates and backtrack from the gradients to obtain information about the raw data. Although this attack has been widely studied in computer vision and natural language processing tasks, its impact on acoustic signals still requires a comprehensive investigation. To the best of our knowledge, we are the first to explore gradient inversion attacks on acoustic signals by extracting speakers’ voices from an audio recognition system. Here, we design a new application of the gradient inversion attack to retrieve the audio signal used to train the deep learning model, irrespective of whether the audio was converted into mel-spectrogram or MFCC representations before being fed to the neural network. Experimental results demonstrate the capability of our attack method to extract the input vectors of the audio data from the gradients, which highlights the security risk of revealing sensitive audio data from highly secured systems. We also discuss several possible countermeasures and their effectiveness in preventing the attack.
dc.description.sponsorship: We acknowledge the support of the U.S. Army Grant #W911NF21-20076 and NSF grant #1923982.
dc.description.uri: https://ieeexplore.ieee.org/abstract/document/10445809
dc.format.extent: 5 pages
dc.genre: conference papers and proceedings
dc.identifier: doi:10.13016/m2ysrd-ncad
dc.identifier.citation: Ovi, Pretom Roy, and Aryya Gangopadhyay. “Gradient Inversion Attacks on Acoustic Signals: Revealing Security Risks in Audio Recognition Systems.” ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2024, 4835–39. https://doi.org/10.1109/ICASSP48485.2024.10445809.
dc.identifier.uri: https://doi.org/10.1109/ICASSP48485.2024.10445809
dc.identifier.uri: http://hdl.handle.net/11603/33002
dc.language.iso: en_US
dc.publisher: IEEE
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Information Systems Department
dc.relation.ispartof: UMBC Student Collection
dc.subject: Security
dc.subject: Vectors
dc.subject: Acoustics
dc.subject: Adversarial attacks
dc.subject: Audio
dc.subject: Data privacy
dc.subject: System performance
dc.subject: Task analysis
dc.subject: Training data
dc.title: Gradient Inversion Attacks on Acoustic Signals: Revealing Security Risks in Audio Recognition Systems
dc.type: Text
dcterms.creator: https://orcid.org/0000-0002-7553-7932