Predictive Anonymization: Utility-Preserving Publishing of Sparse Recommendation Data

Descriptive

Language
LanguageTerm (authority = ISO 639-3:2007); (type = text)
English
Genre (authority = RULIB-FS)
Other
Genre (authority = marcgt)
technical report
PhysicalDescription
InternetMediaType
application/pdf
Extent
12 p.
Note (type = special display note)
Technical report DCS-TR-647
Name (type = corporate); (authority = RutgersOrg-School)
NamePart
School of Arts and Sciences (SAS) (New Brunswick)
Name (type = corporate); (authority = RutgersOrg-Department)
NamePart
Computer Science (New Brunswick)
TypeOfResource
Text
TitleInfo
Title
Predictive Anonymization: Utility-Preserving Publishing of Sparse Recommendation Data
Name (type = personal)
NamePart (type = family)
Chang
NamePart (type = given)
Chih-Cheng
Affiliation
Computer Science (New Brunswick)
Role
RoleTerm (type = text); (authority = marcrt)
author
Name (type = personal)
NamePart (type = family)
Thompson
NamePart (type = given)
Brian
Affiliation
Computer Science (New Brunswick)
Role
RoleTerm (type = text); (authority = marcrt)
author
Name (type = personal)
NamePart (type = family)
Wang
NamePart (type = given)
Hui (Wendy)
Affiliation
Stevens Institute of Technology
Role
RoleTerm (type = text); (authority = marcrt)
author
Name (type = personal)
NamePart (type = family)
Yao
NamePart (type = given)
Danfeng
Affiliation
Computer Science (New Brunswick)
Role
RoleTerm (type = text); (authority = marcrt)
author
OriginInfo
DateCreated (encoding = w3cdtf); (qualifier = exact); (keyDate = yes)
2009-04
Abstract (type = abstract)
Recently, recommender systems have been introduced to predict user preferences for products or services. In order to seek better prediction techniques, data owners of recommender systems such as Netflix sometimes make their customers' reviews available to the public, which raises serious privacy concerns. With only a small amount of knowledge about individuals and their ratings for some items in a recommender system, an adversary may easily identify the users and breach their privacy. Unfortunately, most of the existing privacy models (e.g., k-anonymity) cannot be directly applied to recommender systems. In this paper, we study the problem of privacy-preserving publishing of recommendation datasets. We represent recommendation data as a bipartite graph, and define several attacks on the graph that can re-identify users and determine their rated items and ratings. To deal with these attacks, we give formal privacy definitions for recommender systems. We develop a robust and efficient anonymization algorithm, Predictive Anonymization, to achieve the privacy goals. Our experimental results show that Predictive Anonymization can prevent the attacks with very little impact on prediction accuracy.
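The abstract describes representing sparse recommendation data as a bipartite user-item graph whose edges carry ratings. The sketch below is only an illustrative rendering of that representation, not the authors' implementation; the names RatingGraph, add_rating, and rated_items are assumptions introduced for this example.

```python
# Minimal sketch (illustrative only, not the paper's code): sparse
# recommendation data as a bipartite graph with users on one side,
# items on the other, and rating-labeled edges between them.
from collections import defaultdict


class RatingGraph:
    """Bipartite user-item graph; each edge stores a rating."""

    def __init__(self):
        self.user_edges = defaultdict(dict)  # user -> {item: rating}
        self.item_edges = defaultdict(dict)  # item -> {user: rating}

    def add_rating(self, user, item, rating):
        # One rating becomes one edge, recorded from both sides.
        self.user_edges[user][item] = rating
        self.item_edges[item][user] = rating

    def rated_items(self, user):
        # The sparse set of items a user has rated: the kind of partial
        # background knowledge an adversary might try to match against.
        return set(self.user_edges[user])


# Usage: a few sparse ratings
g = RatingGraph()
g.add_rating("u1", "movie_a", 5)
g.add_rating("u1", "movie_b", 2)
g.add_rating("u2", "movie_a", 4)
print(g.rated_items("u1"))  # e.g. {'movie_a', 'movie_b'}
```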
RelatedItem (type = host)
TitleInfo
Title
Computer Science (New Brunswick)
Identifier (type = local)
rucore21032500001
Location
PhysicalLocation (authority = marcorg); (displayLabel = Rutgers, The State University of New Jersey)
NjNbRU
Identifier (type = doi)
doi:10.7282/T3ZK5M3B

Rights

RightsDeclaration (AUTHORITY = rightsstatements.org); (TYPE = IN COPYRIGHT); (ID = http://rightsstatements.org/vocab/InC/1.0/)
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
Copyright
Status
Copyright protected
Availability
Status
Open
Reason
Permission or license

Technical

RULTechMD (ID = TECHNICAL1)
ContentModel
Document