Multi-Speaker Conversational Audio Deepfake: Taxonomy, Dataset and Pilot Study

dc.contributor.author: Ahmed, Alabi
dc.contributor.author: Janeja, Vandana
dc.contributor.author: Purushotham, Sanjay
dc.date.accessioned: 2026-03-05T19:36:34Z
dc.date.issued: 2026-01-30
dc.description: 2025 IEEE International Conference on Data Mining, ICDM 2025, November 12-15, 2025, Washington DC, USA
dc.description.abstract: The rapid advances in text-to-speech (TTS) technologies have made audio deepfakes increasingly realistic and accessible, raising significant security and trust concerns. While existing research has largely focused on detecting single-speaker audio deepfakes, real-world malicious applications in multi-speaker conversational settings are also emerging as a major underexplored threat. To address this gap, we propose a conceptual taxonomy of multi-speaker conversational audio deepfakes, distinguishing between partial manipulations (one or multiple speakers altered) and full manipulations (entire conversations synthesized). As a first step, we introduce a new Multi-speaker Conversational Audio Deepfakes Dataset (MsCADD) of 2,830 audio clips containing real and fully synthetic two-speaker conversations, generated using VITS and SoundStorm-based NotebookLM models to simulate natural dialogue with variations in speaker gender and conversational spontaneity. MsCADD is limited to text-to-speech (TTS) deepfakes. We benchmark three neural baseline models, LFCC-LCNN, RawNet2, and Wav2Vec 2.0, on this dataset and report performance in terms of F1 score, accuracy, true positive rate (TPR), and true negative rate (TNR). Results show that these baseline models provide a useful benchmark; however, they also highlight a significant gap in multi-speaker deepfake research: reliably detecting synthetic voices under varied conversational dynamics remains difficult. Our dataset and benchmarks provide a foundation for future research on deepfake detection in conversational scenarios, an underexplored area of research and a major threat to trustworthy information in audio settings. The MsCADD dataset is publicly available to support reproducibility and benchmarking by the research community.
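The abstract reports detection performance as F1 score, accuracy, TPR, and TNR. As a minimal sketch of how these standard binary-classification metrics are computed from per-clip predictions (the labels and predictions below are hypothetical, not drawn from the paper's results):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, F1, TPR, and TNR for binary labels (1 = fake, 0 = real)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate (recall)
    tnr = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    f1 = 2 * precision * tpr / (precision + tpr) if (precision + tpr) else 0.0
    return {"accuracy": accuracy, "f1": f1, "tpr": tpr, "tnr": tnr}

# Hypothetical predictions on six clips (1 = fake, 0 = real)
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
m = binary_metrics(y_true, y_pred)
```

In this toy example there are 2 true positives, 2 true negatives, 1 false positive, and 1 false negative, so accuracy is 4/6 while F1, TPR, and TNR each come out to 2/3.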
dc.description.uri: http://arxiv.org/abs/2602.00295
dc.format.extent: 6 pages
dc.genre: conference papers and proceedings
dc.genre: preprints
dc.identifier: doi:10.13016/m2ogck-hyxr
dc.identifier.uri: https://doi.org/10.48550/arXiv.2602.00295
dc.identifier.uri: http://hdl.handle.net/11603/42160
dc.language.iso: en
dc.relation.isAvailableAt: The University of Maryland, Baltimore County (UMBC)
dc.relation.ispartof: UMBC Faculty Collection
dc.relation.ispartof: UMBC Information Systems Department
dc.relation.ispartof: UMBC Student Collection
dc.rights: This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
dc.subject: Computer Science - Artificial Intelligence
dc.subject: Electrical Engineering and Systems Science - Audio and Speech Processing
dc.subject: Computer Science - Sound
dc.subject: UMBC Cybersecurity Institute
dc.subject: UMBC Multi-Data (MData) Lab
dc.subject: UMBC High Performance Computing Facility
dc.title: Multi-Speaker Conversational Audio Deepfake: Taxonomy, Dataset and Pilot Study
dc.type: Text
dcterms.creator: https://orcid.org/0000-0003-0130-6135

Files

Original bundle

Name: 2602.00295v1.pdf
Size: 211.83 KB
Format: Adobe Portable Document Format