Staff View
Harnessing adversarial samples against voice assistants

Descriptive

TitleInfo
Title
Harnessing adversarial samples against voice assistants
Name (type = personal)
NamePart (type = family)
Wu
NamePart (type = given)
Yi
NamePart (type = date)
1996-
DisplayForm
Yi Wu
Role
RoleTerm (authority = RULIB)
author
Name (type = personal)
NamePart (type = family)
Chen
NamePart (type = given)
Yingying
DisplayForm
Yingying Chen
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
chair
Name (type = personal)
NamePart (type = family)
Yuan
NamePart (type = given)
Bo
DisplayForm
Bo Yuan
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = personal)
NamePart (type = family)
Howard
NamePart (type = given)
Richard E.
DisplayForm
Richard E. Howard
Affiliation
Advisory Committee
Role
RoleTerm (authority = RULIB)
internal member
Name (type = corporate)
NamePart
Rutgers University
Role
RoleTerm (authority = RULIB)
degree grantor
Name (type = corporate)
NamePart
School of Graduate Studies
Role
RoleTerm (authority = RULIB)
school
TypeOfResource
Text
Genre (authority = marcgt)
theses
OriginInfo
DateCreated (encoding = w3cdtf); (qualifier = exact)
2019
DateOther (encoding = w3cdtf); (qualifier = exact); (type = degree)
2019-05
CopyrightDate (encoding = w3cdtf); (qualifier = exact)
2019
Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Abstract
With the widespread use of machine learning techniques in many areas of our lives (e.g., recognizing images, videos, and voice commands), their security vulnerabilities have become a growing public concern. For instance, deep neural networks (DNNs), a popular class of machine learning models, have been shown to be vulnerable to deliberately designed adversarial attacks. Existing studies mainly focus on feeding adversarial images to fool image classification systems: by adding carefully crafted perturbations, an attacker can make the system misclassify an image as any target object.
In this thesis, we propose an adversarial attack against automatic speech recognition (ASR) systems (e.g., Siri and Google Assistant). We demonstrate that an adversary can embed malicious voice commands into regular songs, and these embedded commands can be recognized by the ASR system while going unnoticed by human listeners. In particular, we use a genetic algorithm to craft the original song toward the probability density function (PDF) identifiers of the malicious commands, allowing the crafted song to be recognized as the embedded commands. Our evaluation demonstrates that the commands in the crafted songs are successfully recognized by the ASR system (e.g., Kaldi) with an average success rate of over 58%. By computing the covariance between the crafted song and the original one, we show that the similarity between them is over 95%, making the embedded commands hard for humans to notice.
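Note: the abstract describes crafting a song with a genetic algorithm so that the perturbed audio matches the target command's features while keeping the perturbation small. The following is a minimal, hypothetical sketch of that optimization loop on toy data; SONG, TARGET, the fitness weights, and all function names are illustrative assumptions, not the thesis's actual implementation.

```python
import random

random.seed(0)

# Hypothetical stand-ins: the "song" is a short sample vector and the
# "ASR match" is closeness of the perturbed signal to target command features.
SONG = [0.1, -0.3, 0.5, 0.2, -0.1, 0.4, 0.0, -0.2]
TARGET = [0.3, -0.1, 0.6, 0.4, 0.1, 0.5, 0.2, 0.0]

def fitness(perturbation):
    """Higher when the crafted song matches the target features,
    penalized by perturbation energy (to keep changes unnoticeable)."""
    crafted = [s + p for s, p in zip(SONG, perturbation)]
    match = -sum((c - t) ** 2 for c, t in zip(crafted, TARGET))
    energy = sum(p ** 2 for p in perturbation)
    return match - 0.1 * energy

def mutate(ind, rate=0.3, scale=0.05):
    # Randomly nudge some genes (perturbation samples) with Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in ind]

def crossover(a, b):
    # Single-point crossover between two parent perturbations.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    pop = [[random.uniform(-0.1, 0.1) for _ in SONG] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]  # keep the fittest third
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

In the real attack the fitness would be driven by the ASR system's recognition of the crafted audio (e.g., Kaldi's output against the target command's PDF identifiers) rather than this toy feature distance.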
Subject (authority = ETD-LCSH)
Topic
Machine learning
Subject (authority = RUETD)
Topic
Electrical and Computer Engineering
Subject (authority = ETD-LCSH)
Topic
Automatic speech recognition
Subject (authority = local)
Topic
Adversarial sample
RelatedItem (type = host)
TitleInfo
Title
Rutgers University Electronic Theses and Dissertations
Identifier (type = RULIB)
ETD
Identifier
ETD_9872
PhysicalDescription
Form (authority = gmd)
InternetMediaType
application/pdf
InternetMediaType
text/xml
Extent
1 online resource (vi, 32 pages) : illustrations
Note (type = degree)
M.S.
Note (type = bibliography)
Includes bibliographical references
RelatedItem (type = host)
TitleInfo
Title
School of Graduate Studies Electronic Theses and Dissertations
Identifier (type = local)
rucore10001600001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/t3-2zvt-vy33
Genre (authority = ExL-Esploro)
ETD graduate

Rights

RightsDeclaration (ID = rulibRdec0006)
The author owns the copyright to this work.
RightsHolder (type = personal)
Name
FamilyName
Wu
GivenName
Yi
Role
Copyright Holder
RightsEvent
Type
Permission or license
DateTime (encoding = w3cdtf); (qualifier = exact); (point = start)
2019-04-15 16:36:23
AssociatedEntity
Name
Yi Wu
Role
Copyright holder
Affiliation
Rutgers University. School of Graduate Studies
AssociatedObject
Type
License
Name
Author Agreement License
Detail
I hereby grant to the Rutgers University Libraries and to my school the non-exclusive right to archive, reproduce and distribute my thesis or dissertation, in whole or in part, and/or my abstract, in whole or in part, in and from an electronic format, subject to the release date subsequently stipulated in this submittal form and approved by my school. I represent and stipulate that the thesis or dissertation and its abstract are my original work, that they do not infringe or violate any rights of others, and that I make these grants as the sole owner of the rights to my thesis or dissertation and its abstract. I represent that I have obtained written permissions, when necessary, from the owner(s) of each third party copyrighted matter to be included in my thesis or dissertation and will supply copies of such upon request by my school. I acknowledge that RU ETD and my school will not distribute my thesis or dissertation or its abstract if, in their reasonable judgment, they believe all such rights have not been secured. I acknowledge that I retain ownership rights to the copyright of my work. I also retain the right to use all or part of this thesis or dissertation in future works, such as articles or books.
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
ETD
OperatingSystem (VERSION = 5.1)
windows xp
CreatingApplication
Version
1.5
DateCreated (point = end); (encoding = w3cdtf); (qualifier = exact)
2019-04-18T21:20:56
ApplicationName
pdfTeX-1.40.18