TY - JOUR
TI - Defeating speaker recognition systems using adversarial examples
DO - https://doi.org/doi:10.7282/t3-xam6-w034
PY - 2019
AB - Voice-user interfaces (VUIs) have exploded in popularity due to recent advances in automatic speech recognition (ASR). Such interfaces have been integrated into various platforms, such as mobile phones, smart appliances, and stand-alone voice assistants, and provide a convenient way for people to interact with them. Moreover, building on such a convenient VUI, speaker recognition systems that can be seamlessly integrated to facilitate security-related applications or personalized services have gained considerable attention recently. However, existing studies have shown that deep neural networks (DNNs), the computational core of speaker recognition systems, are vulnerable to adversarial examples: strategically perturbed inputs that lead to fraudulent predictions. In this thesis, we demonstrate that it is possible to construct adversarial examples against speaker recognition systems. In particular, we use several gradient/iterative-based methods (e.g., the fast gradient method (FGM), the fast gradient sign method (FGSM), and the basic iterative method (BIM)) together with our proposed optimization-based approach to craft adversarial perturbations for both untargeted and targeted attacks. An untargeted attack makes the system produce an incorrect prediction, while a targeted attack changes the prediction to any user chosen by the adversary. Our experiments on a public speech dataset of 109 people show that, with only partial knowledge of the speaker recognition system, the adversary can reduce the system's accuracy by over 60% in the untargeted attack. For the targeted attack, our approach achieves an overall success rate of over 60%. With full knowledge of the system, the adversary can reduce the system's accuracy by 65% while keeping the crafted speech unnoticeable to humans.
KW - Electrical and Computer Engineering
KW - Speaker recognition
KW - Speaker identification
KW - Machine learning
KW - Adversarial example
KW - Neural networks (Computer science) -- Security measures
LA - English
ER -