Language (ISO 639-3:2007): English
Abstract
The voice user interface (VUI) has exploded in popularity due to recent advances in automatic speech recognition (ASR). Such interfaces have been integrated into various platforms, including mobile phones, smart appliances, and stand-alone voice assistants, providing a convenient way for people to interact with these devices. Building on this convenience, speaker recognition systems, which can be seamlessly integrated into a VUI to facilitate security-related applications or personalized services, have gained considerable attention recently. However, existing studies have shown that deep neural networks (DNNs), the computational core of speaker recognition systems, are vulnerable to adversarial examples: strategically perturbed inputs that lead the model to a fraudulent prediction.
In this thesis, we demonstrate that it is possible to construct adversarial examples against speaker recognition systems. In particular, we use several gradient-based and iterative methods (e.g., the fast gradient method (FGM), the fast gradient sign method (FGSM), and the basic iterative method (BIM)), along with our proposed optimization-based approach, to craft adversarial perturbations for both untargeted and targeted attacks. An untargeted attack makes the system produce an incorrect prediction, whereas a targeted attack changes the prediction to any speaker the adversary desires. Our experiments on a public speech dataset of 109 people show that, with only partial knowledge of the speaker recognition system, the adversary can reduce the system's accuracy by over 60% in the untargeted setting. For the targeted attack, our approach achieves an overall success rate of over 60%. With full knowledge of the system, the adversary can reduce the system's accuracy by 65% while keeping the crafted speech unnoticeable to humans.
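To make the attack families named above concrete, the following is a minimal sketch (not the thesis's actual pipeline) of FGM, FGSM, and BIM perturbation steps. The loss gradient is supplied as a plain array here; in a real attack it would come from backpropagation through the target model, and the function names and the toy quadratic loss are purely illustrative.

```python
import numpy as np

def fgm_perturb(x, grad, eps):
    """Fast gradient method: one step along the L2-normalized loss gradient."""
    return x + eps * grad / (np.linalg.norm(grad) + 1e-12)

def fgsm_perturb(x, grad, eps):
    """Fast gradient sign method: one step along the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def bim_perturb(x, grad_fn, eps, alpha, steps):
    """Basic iterative method: repeated small FGSM steps,
    clipped to an eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return x_adv

# Toy example: loss J(x) = sum(x^2), so grad J = 2x (stands in for a
# model's loss gradient w.r.t. the input waveform).
x = np.array([0.5, -0.3, 0.1])
grad = 2 * x
x_fgsm = fgsm_perturb(x, grad, eps=0.01)   # each sample nudged by +/- 0.01
x_bim = bim_perturb(x, lambda v: 2 * v, eps=0.02, alpha=0.01, steps=5)
```

The untargeted variants above *ascend* the loss of the true label; a targeted attack would instead *descend* the loss of the adversary-chosen speaker label (i.e., subtract the step rather than add it).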
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.