P3DM 2008

Practical Privacy-Preserving Data Mining
Dates: Apr 26, 2008 - Apr 26, 2008
Homepage: www.csee.umbc.edu/~kunliu1/p3dm08
Location: Atlanta, USA

Important dates
Submissions: Dec 28, 2007


Governmental and commercial organizations today capture large amounts of data on individual behavior and increasingly apply data mining to it. This has raised serious concerns for individuals' civil liberties as well as their economic well-being. In 2003, concerns over the U.S. Total Information Awareness (also known as Terrorism Information Awareness) project led to the introduction of a bill in the U.S. Senate that would have banned any data mining programs in the U.S. Department of Defense. Debates over the need for privacy protection versus service to national security and business interests were held in newspapers, magazines, research articles, television talk shows, and elsewhere. Currently, both the public and businesses seem to hold polarized opinions: there are those who think an organization can analyze information it has gathered for any purpose it desires, and those who think that every type of data mining should be forbidden. Neither position does the issue much justice, because the former promotes public fear (notably, Sun's Scott McNealy's 1999 remark "You have no privacy, get over it!") and the latter overly restrictive legislation.

The truth of the matter is not that technology has progressed to the point where privacy is no longer feasible, but rather the opposite: privacy-preservation technology must advance to the point where privacy no longer relies on an accidental lack of information but on an intentional, engineered inability to know. This belief is at the heart of privacy-preserving data mining. Since the pioneering work of Agrawal & Srikant and of Lindell & Pinkas in 2000, publications in this area have grown explosively. Many privacy-preserving data mining techniques have been proposed, questioned, and improved. However, compared with the active and fruitful research in academia, applications of privacy-preserving data mining to real-life problems remain quite rare. Without practice, it is feared that research in privacy-preserving data mining will stagnate. Furthermore, the lack of practice may hint at serious problems with the underlying concepts of privacy-preserving data mining. Identifying and rectifying these problems must be a top priority for advancing the field.
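The randomization approach pioneered by Agrawal & Srikant can be illustrated with a minimal sketch. The noise range, data values, and helper name below are illustrative assumptions, not taken from any specific paper:

```python
import random

def perturb(values, noise_range=15.0, seed=42):
    """Randomization-based perturbation (illustrative sketch): add
    independent uniform noise from [-noise_range, +noise_range] to each
    sensitive value. Individual records are masked, but zero-mean noise
    leaves aggregate statistics such as the mean estimable."""
    rng = random.Random(seed)
    return [v + rng.uniform(-noise_range, noise_range) for v in values]

# Hypothetical sensitive attribute: ages of 1000 individuals.
ages = [20 + (i % 50) for i in range(1000)]
masked = perturb(ages)

true_mean = sum(ages) / len(ages)
masked_mean = sum(masked) / len(masked)
# The miner sees only `masked`, yet the mean survives perturbation.
print(round(true_mean, 2), round(masked_mean, 2))
```

The trade-off this sketch exposes is exactly the one debated in the literature: larger noise ranges protect individual records better but degrade the accuracy of the mined aggregates.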

The goal of this workshop is to foster discussion regarding the practice of privacy, and especially privacy-preserving data mining. We invite a multi-disciplinary view of the challenges and opportunities of privacy-preserving data mining in practical scenarios. We seek innovative work in the following broad categories:

   1. Privacy for specific domains
      It is clear that privacy is a domain dependent concept: homeland security, healthcare, business secrecy, entertainment, web 2.0, and ubiquitous computing are each different and pose different privacy requirements. Specifically we are interested in:
          * Privacy modeling for specific domains
          * Privacy evaluation techniques and privacy metrics
   2. Economic and legal aspects of privacy protection
      Privacy is no less a societal and economical concept than it is a technological challenge. However, the majority of work on privacy-preserving data mining does not focus on these aspects. Of specific interest are:
          * The economics of privacy
          * Modeling of privacy legislation and automated proofs of adherence
          * Privacy and utility trade-off
   3. Performance aware privacy
      Some of the technological impediments to privacy are rooted in the performance of the algorithms. Of interest is work focused on performance issues in privacy preservation:
          * Efficiency improvements to known algorithms
          * Scalable privacy models
          * Privacy and data streams
   4. Privacy preservation applications
      We invite papers describing applications that rely on privacy preservation and have been tested on actual benchmarks or in production scenarios.

According to an independent March 2006 report by Forrester Research, "Protecting Private Data with Data Masking," 35 percent of corporations will start using data masking for private data by 2010. To be able to do so in a meaningful and efficient manner, a much clearer understanding of the practice of privacy preservation is needed. The workshop aims to enhance the understanding of privacy-preserving data mining from technical, economic and legal perspectives. It will create a unique opportunity for data mining researchers, security and privacy specialists, and industry experts to share their ideas, and to facilitate the creation of real-world applications. 
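The data masking the report refers to can be sketched in a few lines; the field names, salt, and masking rules below are illustrative assumptions, not drawn from the report:

```python
import hashlib

def mask_record(record, salt="demo-salt"):
    """Illustrative data-masking rules: pseudonymize the name with a
    salted hash, redact all but the last four digits of the SSN, and
    generalize the ZIP code to its 3-digit prefix."""
    masked = dict(record)
    masked["name"] = hashlib.sha256((salt + record["name"]).encode()).hexdigest()[:12]
    masked["ssn"] = "***-**-" + record["ssn"][-4:]
    masked["zip"] = record["zip"][:3] + "XX"
    return masked

row = {"name": "Alice Example", "ssn": "123-45-6789", "zip": "30301"}
print(mask_record(row))
```

Deterministic pseudonymization (a salted hash rather than random tokens) is chosen here so that the same individual masks to the same value across tables, preserving join-ability for mining while hiding the identity.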

This CfP was obtained from WikiCFP

Facts about "P3DM 2008"
Acronym: P3DM 2008
End date: April 26, 2008
Event type: Conference
Has coordinates: 33° 44' 56", -84° 23' 25" (latitude 33.748992, longitude -84.390264)
Has location city: Atlanta
Has location country: USA
Homepage: http://www.csee.umbc.edu/~kunliu1/p3dm08
IsA: Event
Start date: April 26, 2008
Submission deadline: December 28, 2007
Title: Practical Privacy-Preserving Data Mining