Differential Privacy in Responsible AI

Benefits

• Resistant to privacy attacks.
• Compositional: the privacy loss of multiple analyses on the same dataset can be added together.

Drawbacks

• Not suitable for small datasets.
• Repeated application of the algorithm increases the cumulative privacy loss.
• Accuracy drops when the privacy budget is low.
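The trade-offs above can be made concrete with the Laplace mechanism, the canonical way to privatize a numeric query. The sketch below is illustrative (the function names are ours, not from any particular library): noise is scaled to sensitivity/ε, so a smaller privacy budget means more noise and lower accuracy, and sequential composition means two ε = 0.5 queries on the same dataset together spend a budget of ε = 1.0.

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to epsilon.

    A counting query has sensitivity 1: adding or removing one
    person changes the result by at most 1.
    """
    return true_count + laplace_noise(sensitivity / epsilon)

# Sequential composition: two queries at epsilon = 0.5 each
# consume a total privacy budget of epsilon = 1.0.
q1 = private_count(1000, epsilon=0.5)
q2 = private_count(1000, epsilon=0.5)
```

Note that with a low ε the noise scale 1/ε grows, which is exactly the accuracy loss listed under the drawbacks.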

How organizations like Apple and Google are implementing DP [3]

Apple uses local differential privacy: noise is added on each individual device before the data is collected by the central server.
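Apple's production system uses more elaborate sketch-based algorithms, but the simplest local-DP mechanism, randomized response, illustrates the same idea: each device perturbs its own answer before anything leaves the device, and the server can still recover an unbiased population estimate. This is a minimal sketch, not Apple's actual implementation.

```python
import math
import random

def randomized_response(true_bit, epsilon):
    """Report one bit with epsilon-local differential privacy.

    With probability e^eps / (e^eps + 1) the device reports the
    truth; otherwise it flips the bit before sending it.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_rate(reports, epsilon):
    """Server-side unbiased estimate of the true fraction of 1s."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)
```

Because every individual report is noisy, the server never learns any single user's true value, yet aggregate statistics remain useful given enough devices.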

Google shares random samples of aggregated and anonymized historical traffic statistics that are made differentially private by adding noise before the data is transmitted.

Microsoft has developed local DP mechanisms for collecting counter data used in its basic analytics tasks.
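One style of local-DP counter collection has each device send only a single noisy bit derived from its counter value, from which the server estimates the population mean. The sketch below follows that 1-bit pattern in simplified form; it is an illustration of the approach, not Microsoft's exact mechanism.

```python
import math
import random

def one_bit_report(value, max_value, epsilon):
    """Each device sends one noisy bit about its counter value.

    The probability of reporting 1 grows linearly with the value,
    bounded away from 0 and 1 to satisfy epsilon-local DP.
    """
    e = math.exp(epsilon)
    p = 1 / (e + 1) + (value / max_value) * (e - 1) / (e + 1)
    return 1 if random.random() < p else 0

def estimate_mean(bits, max_value, epsilon):
    """Server-side unbiased estimate of the mean counter value."""
    e = math.exp(epsilon)
    frac = sum(bits) / len(bits)
    return max_value * (frac * (e + 1) - 1) / (e - 1)
```

Sending a single bit per device keeps the upload tiny while still bounding each user's privacy loss by ε.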

Conclusion

Data privacy is often overlooked when building machine learning systems. With data collection now ubiquitous, extracting private information from a dataset that does not have privacy built in is easier than ever. Differential privacy lets organizations tune the privacy level to their needs and ensures that an attacker can recover only noisy, partially correct data.

References

1. Real-life Examples of Discriminating Artificial Intelligence | by Terence Shin | Towards Data Science
2. List of Data Breaches and Cyber Attacks in May 2022 | 49.8 Million Records (itgovernance.co.uk)
3. Responsible AI by Sray Agarwal and Shashin Mishra (book)
4. What is Differential Privacy? | Georgian Partners
5. Privacy-preserving Logistic Regression, Kamalika Chaudhuri, Information Theory and Applications, University of California, San Diego
6. Microsoft SmartNoise Differential Privacy Machine Learning Case Studies
7. https://research.aimultiple.com/differential-privacy/
8. What is Differential Privacy and How does it Work? | Analytics Steps


© 2023 Fractal Analytics Inc. All rights reserved
